
Hi Grygorii,
On 22/02/2018 at 20:50, Grygorii Strashko wrote:
Hi
I'd appreciate it if you could clarify a few points below.
On 02/21/2018 10:10 AM, Álvaro Fernández Rojas wrote:
BCM6348 IUDMA controller is present on multiple BMIPS (BCM63xx) SoCs.
Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
v3: no changes
v2: Fix dma rx burst config and select DMA_CHANNELS.
 drivers/dma/Kconfig         |   9 +
 drivers/dma/Makefile        |   1 +
 drivers/dma/bcm6348-iudma.c | 505 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 515 insertions(+)
 create mode 100644 drivers/dma/bcm6348-iudma.c
[...]
+static int bcm6348_iudma_enable(struct dma *dma)
+{
+	struct bcm6348_iudma_priv *priv = dev_get_priv(dma->dev);
+	struct bcm6348_chan_priv *ch_priv = priv->ch_priv[dma->id];
+	struct bcm6348_dma_desc *dma_desc;
+	uint8_t i;
+
+	/* init dma rings */
+	dma_desc = ch_priv->dma_ring;
+	for (i = 0; i < ch_priv->dma_ring_size; i++) {
+		if (bcm6348_iudma_chan_is_rx(dma->id)) {
+			dma_desc->status = DMAD_ST_OWN_MASK;
+			dma_desc->length = PKTSIZE_ALIGN;
+			dma_desc->address = virt_to_phys(net_rx_packets[i]);
You are filling the RX queue/ring with buffers defined in the Net core. Does that mean this DMA driver will not be usable for other purposes, since Net can be compiled out?
As far as I know, and depending on the specific SoC, BCM63xx IUDMA is used for Ethernet, USB (device only) and xDSL. So yes, in U-Boot it will be used for Ethernet only. BTW, my first attempt didn't use net_rx_packets, but I saw that approach in the pic32_eth implementation and dropped the DMA-specific buffers. I will add them back ;).
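Something like this is what I have in mind for v4 (just a sketch; the rx_bufs array would be a new, hypothetical field in struct bcm6348_chan_priv):

	/*
	 * Illustrative sketch only: per-channel RX buffers owned by the
	 * DMA driver instead of net_rx_packets. "rx_bufs" is not part of
	 * this patch.
	 */
	static int bcm6348_iudma_alloc_rx_bufs(struct bcm6348_chan_priv *ch_priv)
	{
		uint8_t i;

		for (i = 0; i < ch_priv->dma_ring_size; i++) {
			/* cache-line aligned so invalidate/flush stay contained */
			ch_priv->rx_bufs[i] = memalign(ARCH_DMA_MINALIGN,
						       ALIGN(PKTSIZE_ALIGN,
							     ARCH_DMA_MINALIGN));
			if (!ch_priv->rx_bufs[i])
				return -ENOMEM;
		}

		return 0;
	}

The enable path above would then use virt_to_phys(ch_priv->rx_bufs[i]) instead of virt_to_phys(net_rx_packets[i]).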
Wouldn't it be reasonable to have some sort of .receive_prepare() callback in the DMA dma_ops, so the DMA user can control which buffers to push into the RX DMA channel? It could also be used in the eth_ops.free_pkt() callback (see below).
Yes, probably, but maybe we can achieve that without adding another call.
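If we do end up needing it, I'd imagine the hook looking roughly like this (hypothetical sketch, not an actual API proposal; the existing ops are paraphrased from this series):

	struct dma_ops {
		int (*enable)(struct dma *dma);
		int (*receive)(struct dma *dma, void **dst);
		int (*send)(struct dma *dma, void *src, size_t len);
		/* suggested: DMA user pushes an RX buffer back into the ring */
		int (*receive_prepare)(struct dma *dma, void *dst, size_t size);
		/* ... request/free/transfer ops omitted ... */
	};

That way receive() would only hand the buffer out, and the user would decide when it goes back to the hardware.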
+		} else {
+			dma_desc->status = 0;
+			dma_desc->length = 0;
+			dma_desc->address = 0;
+		}
+
+		if (i == ch_priv->dma_ring_size - 1)
+			dma_desc->status |= DMAD_ST_WRAP_MASK;
+
+		if (bcm6348_iudma_chan_is_rx(dma->id))
+			writel_be(1,
+				  priv->base + DMA_FLOWC_ALLOC_REG(dma->id));
+
+		dma_desc++;
+	}
+
+	/* init to first descriptor */
+	ch_priv->desc_id = 0;
+
+	/* force cache writeback */
+	flush_dcache_range((ulong)ch_priv->dma_ring,
+		ALIGN_END_ADDR(struct bcm6348_dma_desc, ch_priv->dma_ring,
+			       ch_priv->dma_ring_size));
+
+	/* clear sram */
+	writel_be(0, priv->sram + DMAS_STATE_DATA_REG(dma->id));
+	writel_be(0, priv->sram + DMAS_DESC_LEN_STATUS_REG(dma->id));
+	writel_be(0, priv->sram + DMAS_DESC_BASE_BUFPTR_REG(dma->id));
+
+	/* set dma ring start */
+	writel_be(virt_to_phys(ch_priv->dma_ring),
+		  priv->sram + DMAS_RSTART_REG(dma->id));
+
+	/* set flow control */
+	if (bcm6348_iudma_chan_is_rx(dma->id)) {
+		u32 val;
+
+		setbits_be32(priv->base + DMA_CFG_REG,
+			     DMA_CFG_FLOWC_ENABLE(dma->id));
+
+		val = ch_priv->dma_ring_size / 3;
+		writel_be(val, priv->base + DMA_FLOWC_THR_LO_REG(dma->id));
+
+		val = (ch_priv->dma_ring_size * 2) / 3;
+		writel_be(val, priv->base + DMA_FLOWC_THR_HI_REG(dma->id));
+	}
+
+	/* set dma max burst */
+	writel_be(ch_priv->dma_ring_size,
+		  priv->chan + DMAC_BURST_REG(dma->id));
+
+	/* clear interrupts */
+	writel_be(DMAC_IR_DONE_MASK, priv->chan + DMAC_IR_ST_REG(dma->id));
+	writel_be(0, priv->chan + DMAC_IR_EN_REG(dma->id));
+
+	setbits_be32(priv->chan + DMAC_CFG_REG(dma->id), DMAC_CFG_ENABLE_MASK);
+
+	return 0;
+}
[...]
+static int bcm6348_iudma_receive(struct dma *dma, void **dst)
+{
+	struct bcm6348_iudma_priv *priv = dev_get_priv(dma->dev);
+	struct bcm6348_chan_priv *ch_priv = priv->ch_priv[dma->id];
+	struct bcm6348_dma_desc *dma_desc;
+	void __iomem *dma_buff;
+	uint16_t status;
+	int ret;
+
+	/* get dma ring descriptor address */
+	dma_desc = ch_priv->dma_ring;
+	dma_desc += ch_priv->desc_id;
+
+	/* invalidate cache data */
+	invalidate_dcache_range((ulong)dma_desc,
+		ALIGN_END_ADDR(struct bcm6348_dma_desc, dma_desc, 1));
+
+	/* check dma own */
+	if (dma_desc->status & DMAD_ST_OWN_MASK)
+		return 0;
+
+	/* check dma end */
+	if (!(dma_desc->status & DMAD_ST_EOP_MASK))
+		return -EINVAL;
+
+	/* get dma buff descriptor address */
+	dma_buff = phys_to_virt(dma_desc->address);
+
+	/* invalidate cache data */
+	invalidate_dcache_range((ulong)dma_buff,
+				(ulong)(dma_buff + PKTSIZE_ALIGN));
+
+	/* get dma data */
+	*dst = dma_buff;
+	ret = dma_desc->length;
+
+	/* reinit dma descriptor */
+	status = dma_desc->status & DMAD_ST_WRAP_MASK;
+	status |= DMAD_ST_OWN_MASK;
+	dma_desc->length = PKTSIZE_ALIGN;
+	dma_desc->status = status;
+
+	/* flush cache */
+	flush_dcache_range((ulong)dma_desc,
+		ALIGN_END_ADDR(struct bcm6348_dma_desc, dma_desc, 1));
Could you please clarify whether you return the dma_desc to the RX ring here or not?
Yes.
If yes, wouldn't it cause a potential problem on the Net RX path?

	ret = eth_get_ops(current)->recv(current, flags, &packet);
^^ (1) here the buffer is received from DMA (and pushed back into the RX ring??)

	flags = 0;
	if (ret > 0)
		net_process_received_packet(packet, ret);
^^ (2) here it is passed to the Net stack

	if (ret >= 0 && eth_get_ops(current)->free_pkt)
		eth_get_ops(current)->free_pkt(current, packet, ret);
^^ at this point it should be safe to return the buffer to the DMA RX ring.

	if (ret <= 0)
		break;

Can the DMA overwrite the packet after point (1) while the packet is still being processed at (2)?
I don't think so, because as far as I know U-Boot doesn't process more than one packet at a time, does it? But yeah, I see your point, and if it does process more than one packet at once this is not the proper way to handle it. I will use free_pkt in the next version and lock the packet until it's processed.
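Roughly what I have in mind for the free_pkt approach (a sketch only; bcm6348_eth_priv, its rx_dma member and dma_receive_prepare() are placeholder names matching the receive_prepare idea above, not existing API):

	/*
	 * Sketch: keep the descriptor owned by the CPU in receive() and
	 * only hand it back to the hardware here, once the Net stack is
	 * done with the packet.
	 */
	static int bcm6348_eth_free_pkt(struct udevice *dev, uchar *packet,
					int length)
	{
		struct bcm6348_eth_priv *priv = dev_get_priv(dev);

		/* return the buffer to the RX ring: set OWN and re-arm */
		return dma_receive_prepare(&priv->rx_dma, packet, PKTSIZE_ALIGN);
	}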
+
+	/* set flow control buffer alloc */
+	writel_be(1, priv->base + DMA_FLOWC_ALLOC_REG(dma->id));
+
+	/* enable dma */
+	setbits_be32(priv->chan + DMAC_CFG_REG(dma->id), DMAC_CFG_ENABLE_MASK);
+
+	/* set interrupt */
+	writel_be(DMAC_IR_DONE_MASK, priv->chan + DMAC_IR_EN_REG(dma->id));
+
+	/* increment dma descriptor */
+	ch_priv->desc_id = (ch_priv->desc_id + 1) % ch_priv->dma_ring_size;
+
+	return ret - 4;
+}
Regards,
Álvaro.