[U-Boot] [RFC 00/22] Add support for Cavium Octeon-TX CN80XX/CN81XX

The Cavium Octeon-TX 64-bit ARM-based SoCs include the CN80XX and CN81XX. These SoCs expose their on-chip peripherals via PCI ECAM, so the peripheral drivers in this series are PCI based.
This series has been tested on the following Gateworks SBCs:
- GW6100 - CN8021 dual A53 800MHz
- GW6104 - CN8031 quad A53 1500MHz
- GW6300 - CN8020 dual A53 800MHz
- GW6304 - CN8030 quad A53 1500MHz
These patches all originated in Cavium's SDK, which is based on a 2017 version of U-Boot, and have been refactored a bit to strip out things in the original code that did not make sense.
These patches apply on top of U-Boot 2018.11. If needed I can do minor cleanup and rebase on top of master, but I can't be the maintainer.
I'm also looking for comments on the PCI-related patches: I am by no means a PCI expert, and I'm not sure that what I've done here is correct for this SoC, or for the other SoCs that some of the changes would affect.
I'm looking for a maintainer for this - I only have boards with the above SoCs and can't test anything else. I also don't have the resources to be a maintainer of this arch - I'm looking at you, Cavium/Marvell: your SDKs use old forks of U-Boot with a slew of patches, and there is no reason not to mainline this code. I've added all the Cavium/Marvell contacts I could find from previous threads about OcteonTX support in U-Boot.
The way I'm using this on the above boards is to build u-boot-nodtb.bin and package it into a FIP image that the ATF loads. I can document this process if needed.
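For reference, a minimal sketch of that packaging step, assuming TF-A's fiptool has been built and a bl31.bin is already available (the bl31.bin path here is an assumption for illustration, not part of this series):

```shell
# Build U-Boot without the appended DTB - produces u-boot-nodtb.bin
make thunderx_81xx_defconfig
make -j"$(nproc)"

# Package U-Boot as the non-trusted firmware (BL33) into a FIP image
# using TF-A's fiptool; bl31.bin is a placeholder path
fiptool create \
    --soc-fw bl31.bin \
    --nt-fw u-boot-nodtb.bin \
    fip.bin

# Inspect the resulting image contents
fiptool info fip.bin
```

The ATF on these boards then loads the FIP from boot media and hands off to the BL33 (U-Boot) entry point.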
You can find this series on GitHub [1].
Best Regards,
Tim
[1] https://github.com/Gateworks/uboot-newport/tree/v2018.11-newport-rc1
Tim Harvey (22):
  arm: introduce ARCH_THUNDERX
  arm: add thunderx_81xx
  thunderx: add FDT support
  thunderx: add thunderx register definitions and misc functions
  thunderx: move DRAM prints to debug
  dm: pci: add PCI SR-IOV EA support
  fdt: add fdtdec_get_pci_bus_range
  pci: add thunderx pci/ecam driver
  pci: fix pcie enumeration on thunderx
  arm: include 64bit io accessors
  gpio: add thunderx gpio driver
  i2c: add thunderx I2C driver
  spi: add thunderx SPI driver
  xhci: add support for cavium thunderx XHCI
  thunderx_81xx: add support for XHCI
  thunderx_81xx: enable usb mass storage and usb ethernet
  ahci: support 64bit systems
  ahci: set n_ports from host caps
  ahci: add support for ThunderX AHCI
  thunderx_81xx: add AHCI support
  net: add thunderx vnic drivers
  pci: auto probe thunderx NIC devices
 arch/arm/Kconfig | 6 +-
 arch/arm/Makefile | 1 +
 arch/arm/dts/Makefile | 3 +-
 arch/arm/dts/thunderx-81xx.dts | 36 +
 arch/arm/dts/thunderx-81xx.dtsi | 440 +++++
 .../arm/include/asm/arch-thunderx}/atf.h | 2 +-
 .../arm/include/asm/arch-thunderx}/atf_part.h | 0
 arch/arm/include/asm/arch-thunderx/fdt.h | 12 +
 arch/arm/include/asm/arch-thunderx/thunderx.h | 300 ++++
 .../include/asm/arch-thunderx}/thunderx_svc.h | 0
 .../include/asm/arch-thunderx/thunderx_vnic.h | 15 +
 arch/arm/include/asm/gpio.h | 2 +-
 arch/arm/include/asm/io.h | 8 +
 arch/arm/mach-thunderx/Makefile | 2 +
 .../thunderx => arch/arm/mach-thunderx}/atf.c | 6 +-
 arch/arm/mach-thunderx/fdt.c | 218 +++
 arch/arm/mach-thunderx/lowlevel_init.S | 31 +
 arch/arm/mach-thunderx/misc.c | 33 +
 arch/x86/cpu/baytrail/cpu.c | 3 +-
 board/cavium/thunderx/Kconfig | 23 +-
 board/cavium/thunderx/Makefile | 2 +-
 board/cavium/thunderx/thunderx.c | 60 +-
 configs/thunderx_81xx_defconfig | 65 +
 configs/thunderx_88xx_defconfig | 9 +-
 drivers/ata/ahci.c | 46 +-
 drivers/gpio/Kconfig | 7 +
 drivers/gpio/Makefile | 1 +
 drivers/gpio/thunderx_gpio.c | 189 ++
 drivers/i2c/Kconfig | 7 +
 drivers/i2c/Makefile | 1 +
 drivers/i2c/designware_i2c.c | 4 +-
 drivers/i2c/intel_i2c.c | 3 +-
 drivers/i2c/thunderx_i2c.c | 878 ++++++++++
 drivers/mmc/pci_mmc.c | 3 +-
 drivers/net/Kconfig | 34 +
 drivers/net/Makefile | 1 +
 drivers/net/cavium/Makefile | 8 +
 drivers/net/cavium/nic.h | 569 ++++++
 drivers/net/cavium/nic_main.c | 795 +++++++++
 drivers/net/cavium/nic_reg.h | 228 +++
 drivers/net/cavium/nicvf_main.c | 553 ++++++
 drivers/net/cavium/nicvf_queues.c | 1123 ++++++++++++
 drivers/net/cavium/nicvf_queues.h | 364 ++++
 drivers/net/cavium/q_struct.h | 692 ++++++++
 drivers/net/cavium/thunder_bgx.c | 1529 +++++++++++++++++
 drivers/net/cavium/thunder_bgx.h | 259 +++
 drivers/net/cavium/thunder_xcv.c | 190 ++
 drivers/net/cavium/thunderx_smi.c | 388 +++++
 drivers/net/e1000.c | 5 +-
 drivers/net/pch_gbe.c | 3 +-
 drivers/nvme/nvme.c | 3 +-
 drivers/pci/Kconfig | 9 +
 drivers/pci/Makefile | 1 +
 drivers/pci/pci-uclass.c | 316 +++-
 drivers/pci/pci_auto.c | 18 +
 drivers/pci/pci_thunderx.c | 160 ++
 drivers/spi/Kconfig | 6 +
 drivers/spi/Makefile | 1 +
 drivers/spi/thunderx_spi.c | 448 +++++
 drivers/usb/host/ehci-pci.c | 3 +-
 drivers/usb/host/xhci-pci.c | 10 +-
 include/ahci.h | 3 +
 include/configs/thunderx_81xx.h | 82 +
 include/fdtdec.h | 11 +
 include/pci.h | 58 +-
 include/pci_ids.h | 15 +
 lib/fdtdec.c | 13 +
 net/eth_legacy.c | 3 +
 68 files changed, 10248 insertions(+), 69 deletions(-)
 create mode 100644 arch/arm/dts/thunderx-81xx.dts
 create mode 100644 arch/arm/dts/thunderx-81xx.dtsi
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf.h (96%)
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf_part.h (100%)
 create mode 100644 arch/arm/include/asm/arch-thunderx/fdt.h
 create mode 100644 arch/arm/include/asm/arch-thunderx/thunderx.h
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/thunderx_svc.h (100%)
 create mode 100644 arch/arm/include/asm/arch-thunderx/thunderx_vnic.h
 create mode 100644 arch/arm/mach-thunderx/Makefile
 rename {board/cavium/thunderx => arch/arm/mach-thunderx}/atf.c (98%)
 create mode 100644 arch/arm/mach-thunderx/fdt.c
 create mode 100644 arch/arm/mach-thunderx/lowlevel_init.S
 create mode 100644 arch/arm/mach-thunderx/misc.c
 create mode 100644 configs/thunderx_81xx_defconfig
 create mode 100644 drivers/gpio/thunderx_gpio.c
 create mode 100644 drivers/i2c/thunderx_i2c.c
 create mode 100644 drivers/net/cavium/Makefile
 create mode 100644 drivers/net/cavium/nic.h
 create mode 100644 drivers/net/cavium/nic_main.c
 create mode 100644 drivers/net/cavium/nic_reg.h
 create mode 100644 drivers/net/cavium/nicvf_main.c
 create mode 100644 drivers/net/cavium/nicvf_queues.c
 create mode 100644 drivers/net/cavium/nicvf_queues.h
 create mode 100644 drivers/net/cavium/q_struct.h
 create mode 100644 drivers/net/cavium/thunder_bgx.c
 create mode 100644 drivers/net/cavium/thunder_bgx.h
 create mode 100644 drivers/net/cavium/thunder_xcv.c
 create mode 100644 drivers/net/cavium/thunderx_smi.c
 create mode 100644 drivers/pci/pci_thunderx.c
 create mode 100755 drivers/spi/thunderx_spi.c
 create mode 100644 include/configs/thunderx_81xx.h

Signed-off-by: Tim Harvey <tharvey@gateworks.com>
---
 arch/arm/Kconfig | 6 +++---
 arch/arm/Makefile | 1 +
 arch/arm/dts/Makefile | 2 +-
 .../arm/include/asm/arch-thunderx}/atf.h | 2 +-
 .../arm/include/asm/arch-thunderx}/atf_part.h | 0
 .../arm/include/asm/arch-thunderx}/thunderx_svc.h | 0
 arch/arm/include/asm/gpio.h | 2 +-
 arch/arm/mach-thunderx/Makefile | 2 ++
 .../thunderx => arch/arm/mach-thunderx}/atf.c | 6 +++---
 board/cavium/thunderx/Kconfig | 15 ++++++++++++---
 board/cavium/thunderx/Makefile | 2 +-
 board/cavium/thunderx/thunderx.c | 2 +-
 configs/thunderx_88xx_defconfig | 3 ++-
 13 files changed, 28 insertions(+), 15 deletions(-)
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf.h (96%)
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf_part.h (100%)
 rename {include/cavium => arch/arm/include/asm/arch-thunderx}/thunderx_svc.h (100%)
 create mode 100644 arch/arm/mach-thunderx/Makefile
 rename {board/cavium/thunderx => arch/arm/mach-thunderx}/atf.c (98%)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 1f3fa1575a..9f6f5a41da 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -393,7 +393,7 @@ config SPL_USE_ARCH_MEMSET

 config ARM64_SUPPORT_AARCH32
 	bool "ARM64 system support AArch32 execution state"
-	default y if ARM64 && !TARGET_THUNDERX_88XX
+	default y if ARM64 && !ARCH_THUNDERX
 	help
 	  This ARM64 system supports AArch32 execution state.

@@ -1359,8 +1359,8 @@ config ARCH_ROCKCHIP
 	imply TPL_SYSRESET
 	imply USB_FUNCTION_FASTBOOT

-config TARGET_THUNDERX_88XX
-	bool "Support ThunderX 88xx"
+config ARCH_THUNDERX
+	bool "Support ThunderX"
 	select ARM64
 	select OF_CONTROL
 	select PL01X_SERIAL

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4b6c5e1935..f58e2cd29c 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -79,6 +79,7 @@ machine-$(CONFIG_STM32) += stm32
 machine-$(CONFIG_ARCH_STM32MP) += stm32mp
 machine-$(CONFIG_TEGRA) += tegra
 machine-$(CONFIG_ARCH_UNIPHIER) += uniphier
+machine-$(CONFIG_ARCH_THUNDERX) += thunderx
 machine-$(CONFIG_ARCH_ZYNQ) += zynq
 machine-$(CONFIG_ARCH_VERSAL) += versal
 machine-$(CONFIG_ARCH_ZYNQMP_R5) += zynqmp-r5

diff --git a/arch/arm/dts/Makefile b/arch/arm/dts/Makefile
index d36447d18d..87ccd96596 100644
--- a/arch/arm/dts/Makefile
+++ b/arch/arm/dts/Makefile
@@ -192,7 +192,7 @@ dtb-$(CONFIG_AM43XX) += am437x-gp-evm.dtb am437x-sk-evm.dtb \
 	am437x-idk-evm.dtb \
 	am4372-generic.dtb
 dtb-$(CONFIG_TI816X) += dm8168-evm.dtb
-dtb-$(CONFIG_THUNDERX) += thunderx-88xx.dtb
+dtb-$(CONFIG_THUNDERX_88XX) += thunderx-88xx.dtb

 dtb-$(CONFIG_ARCH_SOCFPGA) += \
 	socfpga_arria5_socdk.dtb \

diff --git a/include/cavium/atf.h b/arch/arm/include/asm/arch-thunderx/atf.h
similarity index 96%
rename from include/cavium/atf.h
rename to arch/arm/include/asm/arch-thunderx/atf.h
index 3cf05c43d7..cda42d6140 100644
--- a/include/cavium/atf.h
+++ b/arch/arm/include/asm/arch-thunderx/atf.h
@@ -4,7 +4,7 @@
 **/
 #ifndef __ATF_H__
 #define __ATF_H__
-#include <cavium/atf_part.h>
+#include "atf_part.h"

 ssize_t atf_read_mmc(uintptr_t offset, void *buffer, size_t size);
 ssize_t atf_read_nor(uintptr_t offset, void *buffer, size_t size);

diff --git a/include/cavium/atf_part.h b/arch/arm/include/asm/arch-thunderx/atf_part.h
similarity index 100%
rename from include/cavium/atf_part.h
rename to arch/arm/include/asm/arch-thunderx/atf_part.h

diff --git a/include/cavium/thunderx_svc.h b/arch/arm/include/asm/arch-thunderx/thunderx_svc.h
similarity index 100%
rename from include/cavium/thunderx_svc.h
rename to arch/arm/include/asm/arch-thunderx/thunderx_svc.h

diff --git a/arch/arm/include/asm/gpio.h b/arch/arm/include/asm/gpio.h
index 3039e66bf9..420f253c1b 100644
--- a/arch/arm/include/asm/gpio.h
+++ b/arch/arm/include/asm/gpio.h
@@ -1,5 +1,5 @@
 #if !defined(CONFIG_ARCH_UNIPHIER) && !defined(CONFIG_ARCH_STI) && \
-	!defined(CONFIG_ARCH_K3)
+	!defined(CONFIG_ARCH_K3) && !defined(CONFIG_ARCH_THUNDERX)
 #include <asm/arch/gpio.h>
 #endif
 #include <asm-generic/gpio.h>

diff --git a/arch/arm/mach-thunderx/Makefile b/arch/arm/mach-thunderx/Makefile
new file mode 100644
index 0000000000..34b6ecc2f9
--- /dev/null
+++ b/arch/arm/mach-thunderx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0+
+obj-y := atf.o

diff --git a/board/cavium/thunderx/atf.c b/arch/arm/mach-thunderx/atf.c
similarity index 98%
rename from board/cavium/thunderx/atf.c
rename to arch/arm/mach-thunderx/atf.c
index 2e7ba69d76..cebd84b0e1 100644
--- a/board/cavium/thunderx/atf.c
+++ b/arch/arm/mach-thunderx/atf.c
@@ -7,9 +7,9 @@
 #include <asm/io.h>
 #include <asm/system.h>

-#include <cavium/thunderx_svc.h>
-#include <cavium/atf.h>
-#include <cavium/atf_part.h>
+#include <asm/arch-thunderx/thunderx_svc.h>
+#include <asm/arch-thunderx/atf.h>
+#include <asm/arch-thunderx/atf_part.h>

 #include <asm/psci.h>

diff --git a/board/cavium/thunderx/Kconfig b/board/cavium/thunderx/Kconfig
index 927d8765d6..5fa367ac35 100644
--- a/board/cavium/thunderx/Kconfig
+++ b/board/cavium/thunderx/Kconfig
@@ -1,4 +1,4 @@
-if TARGET_THUNDERX_88XX
+if ARCH_THUNDERX

 config SYS_CPU
 	string
@@ -10,11 +10,20 @@ config SYS_BOARD

 config SYS_VENDOR
 	string
-	default "cavium"
+	default "cavium"
+
+choice
+	prompt "ThunderX board select"
+	optional
+
+config THUNDERX_88XX
+	bool "ThunderX 88xx family"
+
+endchoice

 config SYS_CONFIG_NAME
 	string
-	default "thunderx_88xx"
+	default "thunderx_88xx" if THUNDERX_88XX

 config CMD_ATF
 	bool "Enable ATF query commands"

diff --git a/board/cavium/thunderx/Makefile b/board/cavium/thunderx/Makefile
index 4088c7678d..6da877fa60 100644
--- a/board/cavium/thunderx/Makefile
+++ b/board/cavium/thunderx/Makefile
@@ -3,4 +3,4 @@
 #
 #
-obj-y := thunderx.o atf.o
+obj-y := thunderx.o

diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c
index cf55b633c3..2b80dc56f1 100644
--- a/board/cavium/thunderx/thunderx.c
+++ b/board/cavium/thunderx/thunderx.c
@@ -9,7 +9,7 @@
 #include <errno.h>
 #include <linux/compiler.h>

-#include <cavium/atf.h>
+#include <asm/arch-thunderx/atf.h>
 #include <asm/armv8/mmu.h>

 #if !CONFIG_IS_ENABLED(OF_CONTROL)

diff --git a/configs/thunderx_88xx_defconfig b/configs/thunderx_88xx_defconfig
index b00179a9c8..fe4643f52e 100644
--- a/configs/thunderx_88xx_defconfig
+++ b/configs/thunderx_88xx_defconfig
@@ -1,8 +1,9 @@
 CONFIG_ARM=y
-CONFIG_TARGET_THUNDERX_88XX=y
+CONFIG_ARCH_THUNDERX=y
 CONFIG_SYS_TEXT_BASE=0x00500000
 CONFIG_DEBUG_UART_BASE=0x87e024000000
 CONFIG_DEBUG_UART_CLOCK=24000000
+CONFIG_THUNDERX_88XX=y
 CONFIG_IDENT_STRING=" for Cavium Thunder CN88XX ARM v8 Multi-Core"
 CONFIG_DEBUG_UART=y
 CONFIG_NR_DRAM_BANKS=1

On 22.02.19 19:02, Tim Harvey wrote:
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
arch/arm/Kconfig | 6 +++--- arch/arm/Makefile | 1 + arch/arm/dts/Makefile | 2 +- .../arm/include/asm/arch-thunderx}/atf.h | 2 +- .../arm/include/asm/arch-thunderx}/atf_part.h | 0 .../arm/include/asm/arch-thunderx}/thunderx_svc.h | 0 arch/arm/include/asm/gpio.h | 2 +- arch/arm/mach-thunderx/Makefile | 2 ++ .../thunderx => arch/arm/mach-thunderx}/atf.c | 6 +++--- board/cavium/thunderx/Kconfig | 15 ++++++++++++--- board/cavium/thunderx/Makefile | 2 +- board/cavium/thunderx/thunderx.c | 2 +- configs/thunderx_88xx_defconfig | 3 ++- 13 files changed, 28 insertions(+), 15 deletions(-) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf.h (96%) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf_part.h (100%) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/thunderx_svc.h (100%) create mode 100644 arch/arm/mach-thunderx/Makefile rename {board/cavium/thunderx => arch/arm/mach-thunderx}/atf.c (98%)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 1f3fa1575a..9f6f5a41da 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -393,7 +393,7 @@ config SPL_USE_ARCH_MEMSET
config ARM64_SUPPORT_AARCH32 bool "ARM64 system support AArch32 execution state"
- default y if ARM64 && !TARGET_THUNDERX_88XX
+ default y if ARM64 && !ARCH_THUNDERX
  help
    This ARM64 system supports AArch32 execution state.
@@ -1359,8 +1359,8 @@ config ARCH_ROCKCHIP imply TPL_SYSRESET imply USB_FUNCTION_FASTBOOT
-config TARGET_THUNDERX_88XX
- bool "Support ThunderX 88xx"
+config ARCH_THUNDERX
+ bool "Support ThunderX"
  select ARM64
  select OF_CONTROL
  select PL01X_SERIAL
diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 4b6c5e1935..f58e2cd29c 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -79,6 +79,7 @@ machine-$(CONFIG_STM32) += stm32 machine-$(CONFIG_ARCH_STM32MP) += stm32mp machine-$(CONFIG_TEGRA) += tegra machine-$(CONFIG_ARCH_UNIPHIER) += uniphier +machine-$(CONFIG_ARCH_THUNDERX) += thunderx machine-$(CONFIG_ARCH_ZYNQ) += zynq machine-$(CONFIG_ARCH_VERSAL) += versal machine-$(CONFIG_ARCH_ZYNQMP_R5) += zynqmp-r5 diff --git a/arch/arm/dts/Makefile b/arch/arm/dts/Makefile index d36447d18d..87ccd96596 100644 --- a/arch/arm/dts/Makefile +++ b/arch/arm/dts/Makefile @@ -192,7 +192,7 @@ dtb-$(CONFIG_AM43XX) += am437x-gp-evm.dtb am437x-sk-evm.dtb \ am437x-idk-evm.dtb \ am4372-generic.dtb dtb-$(CONFIG_TI816X) += dm8168-evm.dtb -dtb-$(CONFIG_THUNDERX) += thunderx-88xx.dtb +dtb-$(CONFIG_THUNDERX_88XX) += thunderx-88xx.dtb
dtb-$(CONFIG_ARCH_SOCFPGA) += \ socfpga_arria5_socdk.dtb \ diff --git a/include/cavium/atf.h b/arch/arm/include/asm/arch-thunderx/atf.h similarity index 96% rename from include/cavium/atf.h rename to arch/arm/include/asm/arch-thunderx/atf.h index 3cf05c43d7..cda42d6140 100644 --- a/include/cavium/atf.h +++ b/arch/arm/include/asm/arch-thunderx/atf.h @@ -4,7 +4,7 @@ **/ #ifndef __ATF_H__ #define __ATF_H__ -#include <cavium/atf_part.h> +#include "atf_part.h"
ssize_t atf_read_mmc(uintptr_t offset, void *buffer, size_t size); ssize_t atf_read_nor(uintptr_t offset, void *buffer, size_t size); diff --git a/include/cavium/atf_part.h b/arch/arm/include/asm/arch-thunderx/atf_part.h similarity index 100% rename from include/cavium/atf_part.h rename to arch/arm/include/asm/arch-thunderx/atf_part.h diff --git a/include/cavium/thunderx_svc.h b/arch/arm/include/asm/arch-thunderx/thunderx_svc.h similarity index 100% rename from include/cavium/thunderx_svc.h rename to arch/arm/include/asm/arch-thunderx/thunderx_svc.h diff --git a/arch/arm/include/asm/gpio.h b/arch/arm/include/asm/gpio.h index 3039e66bf9..420f253c1b 100644 --- a/arch/arm/include/asm/gpio.h +++ b/arch/arm/include/asm/gpio.h @@ -1,5 +1,5 @@ #if !defined(CONFIG_ARCH_UNIPHIER) && !defined(CONFIG_ARCH_STI) && \
- !defined(CONFIG_ARCH_K3)
+ !defined(CONFIG_ARCH_K3) && !defined(CONFIG_ARCH_THUNDERX)
This seems to be an unrelated change?
Alex

On 22.02.19 19:02, Tim Harvey wrote:
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
arch/arm/Kconfig | 6 +++--- arch/arm/Makefile | 1 + arch/arm/dts/Makefile | 2 +- .../arm/include/asm/arch-thunderx}/atf.h | 2 +- .../arm/include/asm/arch-thunderx}/atf_part.h | 0 .../arm/include/asm/arch-thunderx}/thunderx_svc.h | 0 arch/arm/include/asm/gpio.h | 2 +- arch/arm/mach-thunderx/Makefile | 2 ++ .../thunderx => arch/arm/mach-thunderx}/atf.c | 6 +++--- board/cavium/thunderx/Kconfig | 15 ++++++++++++--- board/cavium/thunderx/Makefile | 2 +- board/cavium/thunderx/thunderx.c | 2 +-
Is the official brand "Cavium" still alive? Should this be Marvell instead?
Zi? Maen?
configs/thunderx_88xx_defconfig | 3 ++- 13 files changed, 28 insertions(+), 15 deletions(-) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf.h (96%) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/atf_part.h (100%) rename {include/cavium => arch/arm/include/asm/arch-thunderx}/thunderx_svc.h (100%) create mode 100644 arch/arm/mach-thunderx/Makefile rename {board/cavium/thunderx => arch/arm/mach-thunderx}/atf.c (98%)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 1f3fa1575a..9f6f5a41da 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -393,7 +393,7 @@ config SPL_USE_ARCH_MEMSET
config ARM64_SUPPORT_AARCH32 bool "ARM64 system support AArch32 execution state"
- default y if ARM64 && !TARGET_THUNDERX_88XX
+ default y if ARM64 && !ARCH_THUNDERX
  help
    This ARM64 system supports AArch32 execution state.
@@ -1359,8 +1359,8 @@ config ARCH_ROCKCHIP imply TPL_SYSRESET imply USB_FUNCTION_FASTBOOT
-config TARGET_THUNDERX_88XX
- bool "Support ThunderX 88xx"
+config ARCH_THUNDERX
+ bool "Support ThunderX"
This probably should be more explicit. I assume that ThunderX2 would have completely different properties, so better say "Support Marvell ThunderX 80xx/81xx/88xx series"?
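Concretely, the renamed symbol with that more explicit wording might look like the following Kconfig fragment (a sketch of the reviewer's suggestion, not the final patch):

```kconfig
config ARCH_THUNDERX
	bool "Support Marvell ThunderX 80xx/81xx/88xx series"
	select ARM64
	select OF_CONTROL
	select PL01X_SERIAL
```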
Alex

The thunderx-81xx is a device-tree-based board implementation supporting the Cavium Octeon-TX CN80xx/CN81xx SoCs, which combine 64-bit ARM cores with ThunderX peripherals.
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
---
 arch/arm/dts/Makefile | 1 +
 arch/arm/dts/thunderx-81xx.dts | 36 +++
 arch/arm/dts/thunderx-81xx.dtsi | 440 ++++++++++++++++++++++++++++++++
 board/cavium/thunderx/Kconfig | 4 +
 configs/thunderx_81xx_defconfig | 29 +++
 include/configs/thunderx_81xx.h | 71 ++++++
 6 files changed, 581 insertions(+)
 create mode 100644 arch/arm/dts/thunderx-81xx.dts
 create mode 100644 arch/arm/dts/thunderx-81xx.dtsi
 create mode 100644 configs/thunderx_81xx_defconfig
 create mode 100644 include/configs/thunderx_81xx.h
diff --git a/arch/arm/dts/Makefile b/arch/arm/dts/Makefile index 87ccd96596..64e1ee77d0 100644 --- a/arch/arm/dts/Makefile +++ b/arch/arm/dts/Makefile @@ -193,6 +193,7 @@ dtb-$(CONFIG_AM43XX) += am437x-gp-evm.dtb am437x-sk-evm.dtb \ am4372-generic.dtb dtb-$(CONFIG_TI816X) += dm8168-evm.dtb dtb-$(CONFIG_THUNDERX_88XX) += thunderx-88xx.dtb +dtb-$(CONFIG_THUNDERX_81XX) += thunderx-81xx.dtb
dtb-$(CONFIG_ARCH_SOCFPGA) += \ socfpga_arria5_socdk.dtb \ diff --git a/arch/arm/dts/thunderx-81xx.dts b/arch/arm/dts/thunderx-81xx.dts new file mode 100644 index 0000000000..d065a36f96 --- /dev/null +++ b/arch/arm/dts/thunderx-81xx.dts @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0+ OR X11 +/* + * Cavium Thunder DTS file - Thunder board description + * + * Copyright (C) 2018, Cavium Inc. + * + */ + +/dts-v1/; + +/include/ "thunderx-81xx.dtsi" + +/ { + model = "Cavium ThunderX CN81XX board"; + compatible = "cavium,thunder-81xx"; + + aliases { + serial0 = &uaa0; + serial1 = &uaa1; + serial2 = &uaa2; + serial3 = &uaa3; + i2c0 = &i2c_9_0; + i2c1 = &i2c_9_1; + spi0 = &spi_7_0; + }; + + chosen { + stdout-path = "serial0:115200n8"; + }; + + memory@0 { + device_type = "memory"; + reg = <0x0 0x01400000 0x0 0x7EC00000>; + numa-node-id = <0>; + }; +}; diff --git a/arch/arm/dts/thunderx-81xx.dtsi b/arch/arm/dts/thunderx-81xx.dtsi new file mode 100644 index 0000000000..8f1432cb03 --- /dev/null +++ b/arch/arm/dts/thunderx-81xx.dtsi @@ -0,0 +1,440 @@ +// SPDX-License-Identifier: GPL-2.0+ OR X11 +/* + * Cavium Thunder DTS file - Thunder SoC description + * + * Copyright (C) 2018, Cavium Inc. 
+ * + */ + +/ { + compatible = "cavium,thunder-81xx"; + interrupt-parent = <&gic0>; + #address-cells = <2>; + #size-cells = <2>; + + psci { + compatible = "arm,psci-0.2"; + method = "smc"; + }; + + cpus { + #address-cells = <2>; + #size-cells = <0>; + + cpu-map { + cluster0 { + core0 { + cpu = <&CPU0>; + }; + core1 { + cpu = <&CPU1>; + }; + core2 { + cpu = <&CPU2>; + }; + core3 { + cpu = <&CPU3>; + }; + }; + }; + CPU0: cpu@0 { + device_type = "cpu"; + compatible = "cavium,thunder", "arm,armv8"; + reg = <0x0 0x000>; + enable-method = "psci"; + /* socket 0 */ + numa-node-id = <0>; + next-level-cache = <&thunderx_L2_0>; + }; + CPU1: cpu@1 { + device_type = "cpu"; + compatible = "cavium,thunder", "arm,armv8"; + reg = <0x0 0x001>; + enable-method = "psci"; + numa-node-id = <0>; + next-level-cache = <&thunderx_L2_0>; + }; + CPU2: cpu@2 { + device_type = "cpu"; + compatible = "cavium,thunder", "arm,armv8"; + reg = <0x0 0x002>; + enable-method = "psci"; + numa-node-id = <0>; + next-level-cache = <&thunderx_L2_0>; + }; + CPU3: cpu@3 { + device_type = "cpu"; + compatible = "cavium,thunder", "arm,armv8"; + reg = <0x0 0x003>; + enable-method = "psci"; + numa-node-id = <0>; + next-level-cache = <&thunderx_L2_0>; + }; + }; + + thunderx_L2_0: l2-cache0 { + compatible = "cache"; + numa-node-id = <0>; + }; + + timer { + compatible = "arm,armv8-timer"; + interrupts = <1 13 4>, + <1 14 4>, + <1 11 4>, + <1 10 4>; + }; + + pmu { + compatible = "cavium,thunder-pmu", "arm,armv8-pmuv3"; + interrupts = <1 7 4>; + }; + + mmc_supply_3v3: mmc_supply_3v3 { + compatible = "regulator-fixed"; + regulator-name = "mmc_supply_3v3"; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + + gpio = <&gpio_6_0 8 0>; + enable-active-high; + }; + + gic0: interrupt-controller@801000000000 { + compatible = "arm,gic-v3"; + #interrupt-cells = <3>; + #address-cells = <2>; + #size-cells = <2>; + #redistributor-regions = <1>; + ranges; + interrupt-controller; + reg = <0x8010 0x00000000 
0x0 0x010000>, /* GICD */ + <0x8010 0x80000000 0x0 0x600000>; /* GICR */ + interrupts = <1 9 4>; + + its: gic-its@801000020000 { + compatible = "arm,gic-v3-its"; + reg = <0x8010 0x20000 0x0 0x200000>; + msi-controller; + numa-node-id = <0>; + }; + }; + + soc@0 { + compatible = "simple-bus"; + #address-cells = <2>; + #size-cells = <2>; + ranges; + numa-node-id = <0>; + + refclkuaa: refclkuaa { + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <116640000>; + clock-output-names = "refclkuaa"; + }; + + sclk: sclk { + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <800000000>; + clock-output-names = "sclk"; + }; + + uaa0: serial@87e028000000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0x87e0 0x28000000 0x0 0x1000>; + interrupts = <0 5 4>; + clocks = <&refclkuaa>; + clock-names = "apb_pclk"; + skip-init; + }; + + uaa1: serial@87e029000000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0x87e0 0x29000000 0x0 0x1000>; + interrupts = <0 6 4>; + clocks = <&refclkuaa>; + clock-names = "apb_pclk"; + skip-init; + }; + + uaa2: serial@87e02a000000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0x87e0 0x2a000000 0x0 0x1000>; + interrupts = <0 7 4>; + clocks = <&refclkuaa>; + clock-names = "apb_pclk"; + skip-init; + }; + + uaa3: serial@87e02b000000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0x87e0 0x2b000000 0x0 0x1000>; + interrupts = <0 8 4>; + clocks = <&refclkuaa>; + clock-names = "apb_pclk"; + skip-init; + }; + + watch-dog@8440000a0000 { + compatible = "arm,sbsa-gwdt"; + reg = <0x8440 0xa0000 0x0 0x1000>, <0x8440 0xb0000 0x0 0x1000>; + interrupts = <0 9 4>; + }; + + pbus0: nor@0 { + compatible = "cfi-flash"; + reg = <0x8000 0x0 0x0 0x800000>; + device-width = <1>; + bank-width = <1>; + clocks = <&sclk>; + }; + + smmu0@830000000000 { + compatible = "cavium,smmu-v2"; + reg = <0x8300 0x0 0x0 0x2000000>; + #global-interrupts = <1>; + interrupts = <0 68 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, 
<0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, + <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>, <0 69 4>; + + mmu-masters = <&ecam0 0x100>, + <&pem0 0x200>, + <&pem1 0x300>, + <&pem2 0x400>; + + }; + + ecam0: pci@848000000000 { + compatible = "pci-host-ecam-generic"; + device_type = "pci"; + msi-parent = <&its>; + msi-map = <0 &its 0 0x10000>; + bus-range = <0 31>; + #size-cells = <2>; + #address-cells = <3>; + #stream-id-cells = <1>; + u-boot,dm-pre-reloc; + dma-coherent; + reg = <0x8480 0x00000000 0 0x02000000>; /* Configuration space */ + ranges = <0x03000000 0x8010 0x00000000 0x8010 0x00000000 0x080 0x00000000>, /* mem ranges */ + <0x03000000 0x8100 0x00000000 0x8100 0x00000000 0x80 0x00000000>, /* SATA */ + <0x03000000 0x8680 0x00000000 0x8680 0x00000000 0x160 0x28000000>, /* UARTs */ + <0x03000000 0x87e0 0x2c000000 0x87e0 0x2c000000 0x000 0x94000000>, /* PEMs */ + <0x03000000 0x8400 0x00000000 0x8400 0x00000000 0x010 0x00000000>, /* RNM */ + <0x03000000 0x8430 0x00000000 0x8430 0x00000000 0x02 0x00000000>, /* NIC0*/ + <0x03000000 0x87e0 0xc6000000 0x87e0 0xc6000000 0x01f 0x3a000000>; + + mrml_bridge: mrml-bridge0@1,0 { + compatible = "pci-bridge", "cavium,thunder-8890-mrml-bridge"; + #size-cells = <2>; + #address-cells = <3>; + ranges = <0x03000000 0x87e0 0x00000000 0x03000000 0x87e0 0x00000000 0x10 0x00000000>; + reg = <0x0800 0 0 0 0>; /* DEVFN = 0x08 (1:0) */ + device_type = "pci"; + u-boot,dm-pre-reloc; + + mdio-nexus@1,3 { + compatible = 
"cavium,thunder-8890-mdio-nexus"; + #address-cells = <2>; + #size-cells = <2>; + reg = <0x0b00 0 0 0 0>; /* DEVFN = 0x0b (1:3) */ + assigned-addresses = <0x03000000 0x87e0 0x05000000 0x0 0x800000>; + ranges = <0x87e0 0x05000000 0x03000000 0x87e0 0x05000000 0x0 0x800000>; + mdio0@87e005003800 { + compatible = "cavium,thunder-8890-mdio"; + #address-cells = <1>; + #size-cells = <0>; + reg = <0x87e0 0x05003800 0x0 0x30>; + }; + mdio1@87e005003880 { + compatible = "cavium,thunder-8890-mdio"; + #address-cells = <1>; + #size-cells = <0>; + reg = <0x87e0 0x05003880 0x0 0x30>; + }; + }; + + mmc_1_4: mmc@1,4 { + compatible = "cavium,thunder-8890-mmc"; + reg = <0x0c00 0 0 0 0>; /* DEVFN = 0x0c (1:4) */ + #address-cells = <1>; + #size-cells = <0>; + clocks = <&sclk>; + }; + + i2c_9_0: i2c@9,0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8890-twsi"; + reg = <0x4800 0 0 0 0>; /* DEVFN = 0x48 (9:0) */ + clock-frequency = <100000>; + clocks = <&sclk>; + u-boot,dm-pre-reloc; + }; + + i2c_9_1: i2c@9,1 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8890-twsi"; + reg = <0x4900 0 0 0 0>; /* DEVFN = 0x49 (9:1) */ + clock-frequency = <100000>; + clocks = <&sclk>; + u-boot,dm-pre-reloc; + }; + + rgx0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8890-bgx"; + reg = <0x9000 0 0 0 0>; /* DEVFN = 0x90 (16:1) */ + }; + bgx0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8890-bgx"; + reg = <0x8000 0 0 0 0>; /* DEVFN = 0x80 (16:0) */ + }; + bgx1 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8890-bgx"; + reg = <0x8100 0 0 0 0>; /* DEVFN = 0x81 (16:1) */ + }; + }; + + spi_7_0: spi@7,0 { + compatible = "cavium,thunder-8190-spi"; + reg = <0x3800 0x0 0x0 0x0 0x0>; /* DEVFN = 0x38 (7:0) */ + #address-cells = <1>; + #size-cells = <0>; + clocks = <&sclk>; + }; + + gpio_6_0: gpio0@6,0 { + #gpio-cells = <2>; + compatible = 
"cavium,thunder-8890-gpio"; + gpio-controller; + interrupt-controller; + #interrupt-cells = <2>; + reg = <0x3000 0 0 0 0>; /* DEVFN = 0x30 (6:0) */ + u-boot,dm-pre-reloc; + }; + + nfc: nand@b,0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,cn8130-nand"; + reg = <0x5800 0 0 0 0>; /* DEVFN = 0x58 (b:0) */ + clocks = <&sclk>; + }; + }; + + pem0: pci@87e0c0000000 { + + /* "cavium,pci-host-thunder-pem" implies that + the first bus in bus-range has config access + via the "PEM space", subsequent buses have + config assess via the "Configuration space". + The "mem64 PEM" range is used to map the PEM + BAR0, which is used by the AER and PME MSI-X + sources. UEFI and Linux must assign the same + bus number to each device, otherwise Linux + enumeration gets confused. Because UEFI + skips the PEM bus and its PCIe-RC bridge it + uses a numbering that starts 1 bus higher. + */ + + compatible = "cavium,pci-host-thunder-pem"; + device_type = "pci"; + msi-parent = <&its>; + msi-map = <0 &its 0 0x10000>; + bus-range = <0x1f 0x57>; + #size-cells = <2>; + #address-cells = <3>; + #stream-id-cells = <1>; + dma-coherent; + reg = <0x8800 0x1f000000 0x0 0x39000000>, /* Configuration space */ + <0x87e0 0xc0000000 0x0 0x01000000>; /* PEM space */ + ranges = <0x01000000 0x00 0x00000000 0x8830 0x00000000 0x00 0x00010000>, /* I/O */ + <0x03000000 0x00 0x10000000 0x8810 0x10000000 0x0f 0xf0000000>, /* mem64 */ + <0x43000000 0x10 0x00000000 0x8820 0x00000000 0x10 0x00000000>, /* mem64-pref */ + <0x03000000 0x87e0 0xc0000000 0x87e0 0xc0000000 0x00 0x01000000>; /* mem64 PEM */ + + #interrupt-cells = <1>; + interrupt-map-mask = <0 0 0 7>; + interrupt-map = <0 0 0 1 &gic0 0 0 0 16 4>, /* INTA */ + <0 0 0 2 &gic0 0 0 0 17 4>, /* INTB */ + <0 0 0 3 &gic0 0 0 0 18 4>, /* INTC */ + <0 0 0 4 &gic0 0 0 0 19 4>; /* INTD */ + }; + + pem1: pci@87e0c1000000 { + compatible = "cavium,pci-host-thunder-pem"; + device_type = "pci"; + msi-parent = <&its>; + msi-map = <0 &its 0 0x10000>; + 
bus-range = <0x57 0x8f>; + #size-cells = <2>; + #address-cells = <3>; + #stream-id-cells = <1>; + dma-coherent; + reg = <0x8840 0x57000000 0x0 0x39000000>, /* Configuration space */ + <0x87e0 0xc1000000 0x0 0x01000000>; /* PEM space */ + ranges = <0x01000000 0x00 0x00010000 0x8870 0x00010000 0x00 0x00010000>, /* I/O */ + <0x03000000 0x00 0x10000000 0x8850 0x10000000 0x0f 0xf0000000>, /* mem64 */ + <0x43000000 0x10 0x00000000 0x8860 0x00000000 0x10 0x00000000>, /* mem64-pref */ + <0x03000000 0x87e0 0xc1000000 0x87e0 0xc1000000 0x00 0x01000000>; /* mem64 PEM */ + + #interrupt-cells = <1>; + interrupt-map-mask = <0 0 0 7>; + interrupt-map = <0 0 0 1 &gic0 0 0 0 20 4>, /* INTA */ + <0 0 0 2 &gic0 0 0 0 21 4>, /* INTB */ + <0 0 0 3 &gic0 0 0 0 22 4>, /* INTC */ + <0 0 0 4 &gic0 0 0 0 23 4>; /* INTD */ + }; + + pem2: pci@87e0c2000000 { + compatible = "cavium,pci-host-thunder-pem"; + device_type = "pci"; + msi-parent = <&its>; + msi-map = <0 &its 0 0x10000>; + bus-range = <0x8f 0xc7>; + #size-cells = <2>; + #address-cells = <3>; + #stream-id-cells = <1>; + dma-coherent; + reg = <0x8880 0x8f000000 0x0 0x39000000>, /* Configuration space */ + <0x87e0 0xc2000000 0x0 0x01000000>; /* PEM space */ + ranges = <0x01000000 0x00 0x00020000 0x88b0 0x00020000 0x00 0x00010000>, /* I/O */ + <0x03000000 0x00 0x10000000 0x8890 0x10000000 0x0f 0xf0000000>, /* mem64 */ + <0x43000000 0x10 0x00000000 0x88a0 0x00000000 0x10 0x00000000>, /* mem64-pref */ + <0x03000000 0x87e0 0xc2000000 0x87e0 0xc2000000 0x00 0x01000000>; /* mem64 PEM */ + + #interrupt-cells = <1>; + interrupt-map-mask = <0 0 0 7>; + interrupt-map = <0 0 0 1 &gic0 0 0 0 24 4>, /* INTA */ + <0 0 0 2 &gic0 0 0 0 25 4>, /* INTB */ + <0 0 0 3 &gic0 0 0 0 26 4>, /* INTC */ + <0 0 0 4 &gic0 0 0 0 27 4>; /* INTD */ + }; + + tdm: tdm@d,0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "cavium,thunder-8190-tdm"; + reg = <0x6800 0 0 0>; /* DEVFN = 0x68 (d:0) */ + clocks = <&sclk>; + }; + }; + +}; diff --git 
a/board/cavium/thunderx/Kconfig b/board/cavium/thunderx/Kconfig index 5fa367ac35..16df1a9fc2 100644 --- a/board/cavium/thunderx/Kconfig +++ b/board/cavium/thunderx/Kconfig @@ -19,11 +19,15 @@ choice config THUNDERX_88XX bool "ThunderX 88xx family"
+config THUNDERX_81XX + bool "ThunderX 81xx family" + endchoice
config SYS_CONFIG_NAME string default "thunderx_88xx" if THUNDERX_88XX + default "thunderx_81xx" if THUNDERX_81XX
config CMD_ATF bool "Enable ATF query commands" diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig new file mode 100644 index 0000000000..4f6b4ad18c --- /dev/null +++ b/configs/thunderx_81xx_defconfig @@ -0,0 +1,29 @@ +CONFIG_ARM=y +CONFIG_ARCH_THUNDERX=y +CONFIG_SYS_TEXT_BASE=0x00500000 +CONFIG_DEBUG_UART_BASE=0x87e028000000 +CONFIG_DEBUG_UART_CLOCK=24000000 +CONFIG_IDENT_STRING=" for Cavium Thunder CN81XX ARM v8 Multi-Core" +CONFIG_THUNDERX_81XX=y +CONFIG_DEBUG_UART=y +CONFIG_NR_DRAM_BANKS=1 +CONFIG_BOOTDELAY=5 +CONFIG_USE_BOOTARGS=y +CONFIG_BOOTARGS="console=ttyAMA0,115200n8 earlycon=pl011,0x87e028000000 debug maxcpus=4 rootwait rw root=/dev/mmcblk0p2 coherent_pool=16M" +# CONFIG_DISPLAY_CPUINFO is not set +# CONFIG_DISPLAY_BOARDINFO is not set +CONFIG_HUSH_PARSER=y +CONFIG_SYS_PROMPT="ThunderX_81XX> " +# CONFIG_CMD_EXPORTENV is not set +# CONFIG_CMD_IMPORTENV is not set +# CONFIG_CMD_EDITENV is not set +# CONFIG_CMD_SAVEENV is not set +# CONFIG_CMD_ENV_EXISTS is not set +# CONFIG_CMD_FLASH is not set +# CONFIG_CMD_NET is not set +CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" +CONFIG_DM=y +# CONFIG_MMC is not set +CONFIG_DM_SERIAL=y +CONFIG_DEBUG_UART_PL011=y +CONFIG_DEBUG_UART_SKIP_INIT=y diff --git a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h new file mode 100644 index 0000000000..10c4a89232 --- /dev/null +++ b/include/configs/thunderx_81xx.h @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. 
+ */ + +#ifndef __THUNDERX_81XX_H__ +#define __THUNDERX_81XX_H__ + +#include <linux/sizes.h> + +#define CONFIG_THUNDERX + +#define CONFIG_SYS_64BIT + +#define MEM_BASE 0x00500000 + +#define CONFIG_SYS_LOWMEM_BASE MEM_BASE + +/* Link Definitions */ +#define CONFIG_SYS_INIT_SP_ADDR (CONFIG_SYS_SDRAM_BASE + 0xffff0) + +/* SMP Spin Table Definitions */ +#define CPU_RELEASE_ADDR (CONFIG_SYS_SDRAM_BASE + 0x7fff0) + +/* Generic Timer Definitions */ +#define COUNTER_FREQUENCY (0x1800000) /* 24MHz */ + +#define CONFIG_SYS_MEMTEST_START MEM_BASE +#define CONFIG_SYS_MEMTEST_END (MEM_BASE + PHYS_SDRAM_1_SIZE) + +/* Size of malloc() pool */ +#define CONFIG_SYS_MALLOC_LEN (CONFIG_ENV_SIZE + 1 * SZ_1M) + +/* Generic Interrupt Controller Definitions */ +#define GICD_BASE (0x801000000000) +#define GICR_BASE (0x801000002000) +#define CONFIG_SYS_SERIAL0 0x87e024000000 +#define CONFIG_SYS_SERIAL1 0x87e025000000 + +/* Miscellaneous configurable options */ +#define CONFIG_SYS_LOAD_ADDR (MEM_BASE) + +/* Physical Memory Map */ +#define PHYS_SDRAM_1 (MEM_BASE) /* SDRAM Bank #1 */ +#define PHYS_SDRAM_1_SIZE (0x80000000-MEM_BASE) /* 2048 MB */ +#define CONFIG_SYS_SDRAM_BASE PHYS_SDRAM_1 + +#define CONFIG_EXTRA_ENV_SETTINGS \ + "loadaddr=0x02000000\0" \ + "kernel_addr_r=0x02000000\0" \ + "pxefile_addr_r=0x02000000\0" \ + "fdt_addr_r=0x03000000\0" \ + "scriptaddr=0x03100000\0" \ + "pxe_addr_r=0x03200000\0" \ + "ramdisk_addr_r=0x03300000\0" \ + BOOTENV + +#define BOOT_TARGET_DEVICES(func) + +#include <config_distro_bootcmd.h> + +#define CONFIG_ENV_SIZE 0x1000 + +/* Monitor Command Prompt */ +#define CONFIG_SYS_CBSIZE 512 /* Console I/O Buffer Size */ +#define CONFIG_SYS_MAXARGS 64 /* max command args */ +#define CONFIG_NO_RELOCATION 1 +#define PLL_REF_CLK 50000000 /* 50 MHz */ +#define NS_PER_REF_CLK_TICK (1000000000/PLL_REF_CLK) + +#endif /* __THUNDERX_81XX_H__ */

On 22.02.19 19:02, Tim Harvey wrote:
The thunderx-81xx is a device-tree implementation supporting the Cavium Octeon-TX CN80xx/CN81xx SoCs, which combine 64-bit ARM cores with ThunderX peripherals.
Signed-off-by: Tim Harvey tharvey@gateworks.com
arch/arm/dts/Makefile | 1 + arch/arm/dts/thunderx-81xx.dts | 36 +++ arch/arm/dts/thunderx-81xx.dtsi | 440 ++++++++++++++++++++++++++++++++
Are those a 1:1 copy from Linux? If so, it would be good to indicate so. If not, we may want to use the Linux provided ones instead.
board/cavium/thunderx/Kconfig | 4 + configs/thunderx_81xx_defconfig | 29 +++ include/configs/thunderx_81xx.h | 71 ++++++ 6 files changed, 581 insertions(+) create mode 100644 arch/arm/dts/thunderx-81xx.dts create mode 100644 arch/arm/dts/thunderx-81xx.dtsi create mode 100644 configs/thunderx_81xx_defconfig create mode 100644 include/configs/thunderx_81xx.h
diff --git a/arch/arm/dts/Makefile b/arch/arm/dts/Makefile index 87ccd96596..64e1ee77d0 100644 --- a/arch/arm/dts/Makefile +++ b/arch/arm/dts/Makefile @@ -193,6 +193,7 @@ dtb-$(CONFIG_AM43XX) += am437x-gp-evm.dtb am437x-sk-evm.dtb \ am4372-generic.dtb dtb-$(CONFIG_TI816X) += dm8168-evm.dtb dtb-$(CONFIG_THUNDERX_88XX) += thunderx-88xx.dtb +dtb-$(CONFIG_THUNDERX_81XX) += thunderx-81xx.dtb
dtb-$(CONFIG_ARCH_SOCFPGA) += \ socfpga_arria5_socdk.dtb \ diff --git a/arch/arm/dts/thunderx-81xx.dts b/arch/arm/dts/thunderx-81xx.dts new file mode 100644 index 0000000000..d065a36f96 --- /dev/null +++ b/arch/arm/dts/thunderx-81xx.dts @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0+ OR X11 +/*
- Cavium Thunder DTS file - Thunder board description
The board name is not terribly descriptive. What board is this for? A model? An evaluation system?
- Copyright (C) 2018, Cavium Inc.
- */
+/dts-v1/;
+/include/ "thunderx-81xx.dtsi"
+/ {
- model = "Cavium ThunderX CN81XX board";
- compatible = "cavium,thunder-81xx";
[...]
diff --git a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h new file mode 100644 index 0000000000..10c4a89232 --- /dev/null +++ b/include/configs/thunderx_81xx.h @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-2.0+ +/*
- Copyright (C) 2018, Cavium Inc.
- */
+#ifndef __THUNDERX_81XX_H__ +#define __THUNDERX_81XX_H__
+#include <linux/sizes.h>
+#define CONFIG_THUNDERX
+#define CONFIG_SYS_64BIT
+#define MEM_BASE 0x00500000
Why? Are the reserved bits below? In that case, we only need to mark them as reserved, not modify the base, right?
+#define CONFIG_SYS_LOWMEM_BASE MEM_BASE
+/* Link Definitions */ +#define CONFIG_SYS_INIT_SP_ADDR (CONFIG_SYS_SDRAM_BASE + 0xffff0)
+/* SMP Spin Table Definitions */ +#define CPU_RELEASE_ADDR (CONFIG_SYS_SDRAM_BASE + 0x7fff0)
+/* Generic Timer Definitions */ +#define COUNTER_FREQUENCY (0x1800000) /* 24MHz */
+#define CONFIG_SYS_MEMTEST_START MEM_BASE +#define CONFIG_SYS_MEMTEST_END (MEM_BASE + PHYS_SDRAM_1_SIZE)
+/* Size of malloc() pool */ +#define CONFIG_SYS_MALLOC_LEN (CONFIG_ENV_SIZE + 1 * SZ_1M)
+/* Generic Interrupt Controller Definitions */ +#define GICD_BASE (0x801000000000) +#define GICR_BASE (0x801000002000) +#define CONFIG_SYS_SERIAL0 0x87e024000000 +#define CONFIG_SYS_SERIAL1 0x87e025000000
+/* Miscellaneous configurable options */ +#define CONFIG_SYS_LOAD_ADDR (MEM_BASE)
+/* Physical Memory Map */ +#define PHYS_SDRAM_1 (MEM_BASE) /* SDRAM Bank #1 */ +#define PHYS_SDRAM_1_SIZE (0x80000000-MEM_BASE) /* 2048 MB */ +#define CONFIG_SYS_SDRAM_BASE PHYS_SDRAM_1
In fact, are all of these actually used? I thought I got rid of most of the manual header based memory layout bits for aarch64 years ago :).
+#define CONFIG_EXTRA_ENV_SETTINGS \
- "loadaddr=0x02000000\0" \
- "kernel_addr_r=0x02000000\0" \
- "pxefile_addr_r=0x02000000\0" \
- "fdt_addr_r=0x03000000\0" \
- "scriptaddr=0x03100000\0" \
- "pxe_addr_r=0x03200000\0" \
- "ramdisk_addr_r=0x03300000\0" \
- BOOTENV
+#define BOOT_TARGET_DEVICES(func)
+#include <config_distro_bootcmd.h>
+#define CONFIG_ENV_SIZE 0x1000
That sounds quite small for a distro env.
Alex
+/* Monitor Command Prompt */ +#define CONFIG_SYS_CBSIZE 512 /* Console I/O Buffer Size */ +#define CONFIG_SYS_MAXARGS 64 /* max command args */ +#define CONFIG_NO_RELOCATION 1 +#define PLL_REF_CLK 50000000 /* 50 MHz */ +#define NS_PER_REF_CLK_TICK (1000000000/PLL_REF_CLK)
+#endif /* __THUNDERX_81XX_H__ */

The thunderx boards use the Cavium Bringup and Diagnostics Kit (BDK) as a secondary program loader (SPL). This initial boot firmware loads the device-tree and passes its address to the next boot stage in register x1.
Signed-off-by: Tim Harvey tharvey@gateworks.com --- arch/arm/mach-thunderx/Makefile | 2 +- arch/arm/mach-thunderx/fdt.c | 50 ++++++++++++++++++++++++++ arch/arm/mach-thunderx/lowlevel_init.S | 31 ++++++++++++++++ board/cavium/thunderx/thunderx.c | 12 +++++-- 4 files changed, 92 insertions(+), 3 deletions(-) create mode 100644 arch/arm/mach-thunderx/fdt.c create mode 100644 arch/arm/mach-thunderx/lowlevel_init.S
diff --git a/arch/arm/mach-thunderx/Makefile b/arch/arm/mach-thunderx/Makefile index 34b6ecc2f9..fb457cb3e0 100644 --- a/arch/arm/mach-thunderx/Makefile +++ b/arch/arm/mach-thunderx/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0+ -obj-y := atf.o +obj-y := atf.o lowlevel_init.o fdt.o diff --git a/arch/arm/mach-thunderx/fdt.c b/arch/arm/mach-thunderx/fdt.c new file mode 100644 index 0000000000..31f1128e9f --- /dev/null +++ b/arch/arm/mach-thunderx/fdt.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0+ +/** + * Copyright (C) 2014, Cavium Inc. + */ + +#include <common.h> +#include <malloc.h> +#include <errno.h> +#include <environment.h> +#include <fdtdec.h> + +/* From lowlevel_init.S */ +extern unsigned long fdt_base_addr; + +/** + * If the firmware passed a device tree use it for U-Boot + * + * @return FDT base address received from ATF in x1 register + */ +void *board_fdt_blob_setup(void) +{ + if (fdt_magic(fdt_base_addr) != FDT_MAGIC) + return NULL; + return (void *)fdt_base_addr; +} + +int ft_board_setup(void *blob, bd_t *bd) +{ + int offset; + int ret = 0; + + debug("%s\n", __func__); + ret = fdt_check_header(blob); + if (ret < 0) { + printf("ERROR: %s\n", fdt_strerror(ret)); + return ret; + } + + /* remove "cavium, bdk" node from DT */ + if (blob) { + offset = fdt_path_offset(blob, "/cavium,bdk"); + if(offset >= 0) { + ret = fdt_del_node(blob, offset); + debug("%s deleted 'cavium,bdk' node\n", __func__); + } + } + + return ret; +} + diff --git a/arch/arm/mach-thunderx/lowlevel_init.S b/arch/arm/mach-thunderx/lowlevel_init.S new file mode 100644 index 0000000000..fb81ac4fd0 --- /dev/null +++ b/arch/arm/mach-thunderx/lowlevel_init.S @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. 
+ */ + +#include <linux/linkage.h> +#include <asm/macro.h> + +.align 8 +.global fdt_base_addr +fdt_base_addr: + .dword 0x0 + +.global save_boot_params +save_boot_params: + /* Read FDT base from x1 register passed by ATF */ + adr x21, fdt_base_addr + str x1, [x21] + + /* Returns */ + b save_boot_params_ret + +ENTRY(lowlevel_init) + mov x29, lr /* Save LR */ + + /* any lowlevel init should go here */ + + mov lr, x29 /* Restore LR */ + ret +ENDPROC(lowlevel_init) + diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c index 2b80dc56f1..57dce5aee0 100644 --- a/board/cavium/thunderx/thunderx.c +++ b/board/cavium/thunderx/thunderx.c @@ -5,8 +5,9 @@
#include <common.h> #include <dm.h> -#include <malloc.h> #include <errno.h> +#include <fdt_support.h> +#include <malloc.h> #include <linux/compiler.h>
#include <asm/arch-thunderx/atf.h> @@ -38,7 +39,10 @@ U_BOOT_DEVICE(thunderx_serial1) = { .name = "serial_pl01x", .platdata = &serial1, }; -#endif +#else +/* From lowlevel_init.S */ +extern unsigned long fdt_base_addr; +#endif /* !CONFIG_IS_ENABLED(OF_CONTROL) */
DECLARE_GLOBAL_DATA_PTR;
@@ -70,6 +74,10 @@ struct mm_region *mem_map = thunderx_mem_map;
int board_init(void) { +#if CONFIG_IS_ENABLED(OF_CONTROL) + ulong fdt_addr = (ulong)fdt_base_addr; + set_working_fdt_addr(fdt_addr); +#endif return 0; }

On 22.02.19 19:03, Tim Harvey wrote:
The thunderx boards use the Cavium Bringup and Diagnostics Kit (BDK) as a secondary program loader (SPL). This initial boot firmware loads the device-tree and passes its address to the next boot stage in register x1.
Signed-off-by: Tim Harvey tharvey@gateworks.com
In the long run we maybe want to standardize on CONFIG_OF_PRIOR_STAGE for aarch64 as well and then move rpi as well as this target over to it. But for now, I think this solution is very reasonable.
arch/arm/mach-thunderx/Makefile | 2 +- arch/arm/mach-thunderx/fdt.c | 50 ++++++++++++++++++++++++++ arch/arm/mach-thunderx/lowlevel_init.S | 31 ++++++++++++++++ board/cavium/thunderx/thunderx.c | 12 +++++-- 4 files changed, 92 insertions(+), 3 deletions(-) create mode 100644 arch/arm/mach-thunderx/fdt.c create mode 100644 arch/arm/mach-thunderx/lowlevel_init.S
diff --git a/arch/arm/mach-thunderx/Makefile b/arch/arm/mach-thunderx/Makefile index 34b6ecc2f9..fb457cb3e0 100644 --- a/arch/arm/mach-thunderx/Makefile +++ b/arch/arm/mach-thunderx/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0+ -obj-y := atf.o +obj-y := atf.o lowlevel_init.o fdt.o diff --git a/arch/arm/mach-thunderx/fdt.c b/arch/arm/mach-thunderx/fdt.c new file mode 100644 index 0000000000..31f1128e9f --- /dev/null +++ b/arch/arm/mach-thunderx/fdt.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0+ +/**
- Copyright (C) 2014, Cavium Inc.
- */
+#include <common.h> +#include <malloc.h> +#include <errno.h> +#include <environment.h> +#include <fdtdec.h>
+/* From lowlevel_init.S */ +extern unsigned long fdt_base_addr;
+/**
- If the firmware passed a device tree use it for U-Boot
- @return FDT base address received from ATF in x1 register
- */
+void *board_fdt_blob_setup(void) +{
if (fdt_magic(fdt_base_addr) != FDT_MAGIC)
return NULL;
- return (void *)fdt_base_addr;
+}
+int ft_board_setup(void *blob, bd_t *bd) +{
- int offset;
- int ret = 0;
- debug("%s\n", __func__);
- ret = fdt_check_header(blob);
- if (ret < 0) {
printf("ERROR: %s\n", fdt_strerror(ret));
return ret;
- }
- /* remove "cavium,bdk" node from DT */
- if (blob) {
offset = fdt_path_offset(blob, "/cavium,bdk");
if (offset >= 0) {
ret = fdt_del_node(blob, offset);
debug("%s deleted 'cavium,bdk' node\n", __func__);
Why remove it?
Alex

Add Cavium Thunderx common registers, structures, and helper functions
Signed-off-by: Tim Harvey tharvey@gateworks.com --- arch/arm/include/asm/arch-thunderx/thunderx.h | 300 ++++++++++++++++++ arch/arm/mach-thunderx/Makefile | 2 +- arch/arm/mach-thunderx/misc.c | 33 ++ 3 files changed, 334 insertions(+), 1 deletion(-) create mode 100644 arch/arm/include/asm/arch-thunderx/thunderx.h create mode 100644 arch/arm/mach-thunderx/misc.c
diff --git a/arch/arm/include/asm/arch-thunderx/thunderx.h b/arch/arm/include/asm/arch-thunderx/thunderx.h new file mode 100644 index 0000000000..58f36c6cdc --- /dev/null +++ b/arch/arm/include/asm/arch-thunderx/thunderx.h @@ -0,0 +1,300 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. + */ +#ifndef __THUNDERX_H__ +#define __THUNDERX_H__ + +/* Registers */ +#define CAVM_RST_BOOT 0x87e006001600ll +#define CAVM_MIO_FUS_DAT2 0x87e003001410ll +#define CAVM_XCVX_RESET 0x87e0db000000ll + +/* + * Flag bits in top byte. The top byte of MIDR_EL1 is defined + * as ox43, the Cavium implementer code. In this number, bits + * 7,5,4 are defiend as zero. We use these bits to signal + * that revision numbers should be ignored. It isn't ideal + * that these are in the middle of an already defined field, + * but this keeps the model numbers as 32 bits + */ +#define __OM_IGNORE_REVISION 0x80000000 +#define __OM_IGNORE_MINOR_REVISION 0x20000000 +#define __OM_IGNORE_MODEL 0x10000000 + +#define CAVIUM_CN88XX_PASS1_0 0x430f0a10 +#define CAVIUM_CN88XX_PASS1_1 0x430f0a11 +#define CAVIUM_CN88XX_PASS2_0 0x431f0a10 +#define CAVIUM_CN88XX_PASS2_1 0x431f0a11 +#define CAVIUM_CN88XX_PASS2_2 0x431f0a12 +#define CAVIUM_CN88XX (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN88XX_PASS1_X (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION) +#define CAVIUM_CN88XX_PASS2_X (CAVIUM_CN88XX_PASS2_0 | __OM_IGNORE_MINOR_REVISION) + +#define CAVIUM_CN83XX_PASS1_0 0x430f0a30 +#define CAVIUM_CN83XX (CAVIUM_CN83XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN83XX_PASS1_X (CAVIUM_CN83XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION) + +#define CAVIUM_CN81XX_PASS1_0 0x430f0a20 +#define CAVIUM_CN81XX (CAVIUM_CN81XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN81XX_PASS1_X (CAVIUM_CN81XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION) + +#define CAVIUM_CN98XX_PASS1_0 0x430f0b10 +#define CAVIUM_CN98XX (CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN98XX_PASS1_X 
(CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION) + +/* These match entire families of chips */ +#define CAVIUM_CN8XXX (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_MODEL) +#define CAVIUM_CN9XXX (CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_MODEL) + +static inline uint64_t cavium_get_model(void) +{ + uint64_t result; + + __asm ("mrs %[rd],MIDR_EL1" : [rd] "=r" (result)); + + return result; +} + +/** + * Return non-zero if the chip matched the passed model. + * + * @param arg_model One of the CAVIUM_* constants for chip models and passes + * @return Non-zero if match + */ +static inline int CAVIUM_IS_MODEL(uint32_t arg_model) +{ + const uint32_t FAMILY = 0xff00; /* Bits 15:8 */ + const uint32_t PARTNUM = 0xfff0; /* Bits 15:4 */ + const uint32_t VARIANT = 0xf00000; /* Bits 23:20 */ + const uint32_t REVISION = 0xf; /* Bits 3:0 */ + + uint32_t my_model = cavium_get_model(); + uint32_t mask; + + if (arg_model & __OM_IGNORE_MODEL) + mask = FAMILY; + else if (arg_model & __OM_IGNORE_REVISION) + mask = PARTNUM; + else if (arg_model & __OM_IGNORE_MINOR_REVISION) + mask = PARTNUM | VARIANT; + else + mask = PARTNUM | VARIANT | REVISION; + return ((arg_model & mask) == (my_model & mask)); +} + +/** + * Register (RSL) rst_boot + * + * RST Boot Register + */ +union cavm_rst_boot +{ + u64 u; + struct cavm_rst_boot_s { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + u64 chipkill:1; + u64 jtcsrdis:1; + u64 ejtagdis:1; + u64 trusted_mode:1; + u64 ckill_ppdis:1; + u64 jt_tstmode:1; + u64 vrm_err:1; + u64 dis_huk:1; + u64 dis_scan:1; + u64 reserved_47_54:8; + u64 c_mul:7; + u64 reserved_39:1; + u64 pnr_mul:6; + u64 lboot_oci:3; + u64 lboot_pf_flr:4; + u64 lboot_ckill:1; + u64 lboot_jtg:1; + u64 lboot_ext45:6; + u64 lboot_ext23:6; + u64 lboot:10; + u64 rboot:1; + u64 rboot_pin:1; +#else /* Word 0 - Little Endian */ + u64 rboot_pin:1; + u64 rboot:1; + u64 lboot:10; + u64 lboot_ext23:6; + u64 lboot_ext45:6; + u64 lboot_jtg:1; + u64 lboot_ckill:1; + u64 lboot_pf_flr:4; + u64 lboot_oci:3; + 
u64 pnr_mul:6; + u64 reserved_39:1; + u64 c_mul:7; + u64 reserved_47_54:8; + u64 dis_scan:1; + u64 dis_huk:1; + u64 vrm_err:1; + u64 jt_tstmode:1; + u64 ckill_ppdis:1; + u64 trusted_mode:1; + u64 ejtagdis:1; + u64 jtcsrdis:1; + u64 chipkill:1; +#endif /* Word 0 - End */ + } s; + struct cavm_rst_boot_cn81xx { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + u64 chipkill:1; + u64 jtcsrdis:1; + u64 ejtagdis:1; + u64 trusted_mode:1; + u64 ckill_ppdis:1; + u64 jt_tstmode:1; + u64 vrm_err:1; + u64 dis_huk:1; + u64 dis_scan:1; + u64 reserved_47_54:8; + u64 c_mul:7; + u64 reserved_39:1; + u64 pnr_mul:6; + u64 lboot_oci:3; + u64 reserved_26_29:4; + u64 lboot_ckill:1; + u64 lboot_jtg:1; + u64 lboot_ext45:6; + u64 lboot_ext23:6; + u64 lboot:10; + u64 rboot:1; + u64 rboot_pin:1; +#else /* Word 0 - Little Endian */ + u64 rboot_pin:1; + u64 rboot:1; + u64 lboot:10; + u64 lboot_ext23:6; + u64 lboot_ext45:6; + u64 lboot_jtg:1; + u64 lboot_ckill:1; + u64 reserved_26_29:4; + u64 lboot_oci:3; + u64 pnr_mul:6; + u64 reserved_39:1; + u64 c_mul:7; + u64 reserved_47_54:8; + u64 dis_scan:1; + u64 dis_huk:1; + u64 vrm_err:1; + u64 jt_tstmode:1; + u64 ckill_ppdis:1; + u64 trusted_mode:1; + u64 ejtagdis:1; + u64 jtcsrdis:1; + u64 chipkill:1; +#endif /* Word 0 - End */ + } cn81xx; + struct cavm_rst_boot_cn88xx { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + u64 chipkill:1; + u64 jtcsrdis:1; + u64 ejtagdis:1; + u64 trusted_mode:1; + u64 ckill_ppdis:1; + u64 jt_tstmode:1; + u64 vrm_err:1; + u64 dis_huk:1; + u64 dis_scan:1; + u64 reserved_47_54:8; + u64 c_mul:7; + u64 reserved_39:1; + u64 pnr_mul:6; + u64 lboot_oci:3; + u64 reserved_26_29:4; + u64 reserved_24_25:2; + u64 lboot_ext45:6; + u64 lboot_ext23:6; + u64 lboot:10; + u64 rboot:1; + u64 rboot_pin:1; +#else /* Word 0 - Little Endian */ + u64 rboot_pin:1; + u64 rboot:1; + u64 lboot:10; + u64 lboot_ext23:6; + u64 lboot_ext45:6; + u64 reserved_24_25:2; + u64 reserved_26_29:4; + u64 lboot_oci:3; + u64 pnr_mul:6; + 
u64 reserved_39:1; + u64 c_mul:7; + u64 reserved_47_54:8; + u64 dis_scan:1; + u64 dis_huk:1; + u64 vrm_err:1; + u64 jt_tstmode:1; + u64 ckill_ppdis:1; + u64 trusted_mode:1; + u64 ejtagdis:1; + u64 jtcsrdis:1; + u64 chipkill:1; +#endif /* Word 0 - End */ + } cn88xx; + struct cavm_rst_boot_cn83xx { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + u64 chipkill:1; + u64 jtcsrdis:1; + u64 ejtagdis:1; + u64 trusted_mode:1; + u64 ckill_ppdis:1; + u64 jt_tstmode:1; + u64 vrm_err:1; + u64 dis_huk:1; + u64 dis_scan:1; + u64 reserved_47_54:8; + u64 c_mul:7; + u64 reserved_39:1; + u64 pnr_mul:6; + u64 lboot_oci:3; + u64 lboot_pf_flr:4; + u64 lboot_ckill:1; + u64 lboot_jtg:1; + u64 lboot_ext45:6; + u64 lboot_ext23:6; + u64 lboot:10; + u64 rboot:1; + u64 rboot_pin:1; +#else /* Word 0 - Little Endian */ + u64 rboot_pin:1; + u64 rboot:1; + u64 lboot:10; + u64 lboot_ext23:6; + u64 lboot_ext45:6; + u64 lboot_jtg:1; + u64 lboot_ckill:1; + u64 lboot_pf_flr:4; + u64 lboot_oci:3; + u64 pnr_mul:6; + u64 reserved_39:1; + u64 c_mul:7; + u64 reserved_47_54:8; + u64 dis_scan:1; + u64 dis_huk:1; + u64 vrm_err:1; + u64 jt_tstmode:1; + u64 ckill_ppdis:1; + u64 trusted_mode:1; + u64 ejtagdis:1; + u64 jtcsrdis:1; + u64 chipkill:1; +#endif /* Word 0 - End */ + } cn83xx; +}; + +/** + * Returns the I/O clock speed in Hz + */ +u64 thunderx_get_io_clock(void); + +/** + * Returns the core clock speed in Hz + */ +u64 thunderx_get_core_clock(void); + +#endif diff --git a/arch/arm/mach-thunderx/Makefile b/arch/arm/mach-thunderx/Makefile index fb457cb3e0..b3328f4e08 100644 --- a/arch/arm/mach-thunderx/Makefile +++ b/arch/arm/mach-thunderx/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0+ -obj-y := atf.o lowlevel_init.o fdt.o +obj-y := atf.o lowlevel_init.o fdt.o misc.o diff --git a/arch/arm/mach-thunderx/misc.c b/arch/arm/mach-thunderx/misc.c new file mode 100644 index 0000000000..25ac154a9a --- /dev/null +++ b/arch/arm/mach-thunderx/misc.c @@ -0,0 +1,33 @@ +// 
SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. + */ + +#include <common.h> + +#include <asm/arch-thunderx/thunderx.h> +#include <asm/io.h> + +/** + * Returns the I/O clock speed in Hz + */ +u64 thunderx_get_io_clock(void) +{ + union cavm_rst_boot rst_boot; + + rst_boot.u = readq(CAVM_RST_BOOT); + + return rst_boot.s.pnr_mul * PLL_REF_CLK; +} + +/** + * Returns the core clock speed in Hz + */ +u64 thunderx_get_core_clock(void) +{ + union cavm_rst_boot rst_boot; + + rst_boot.u = readq(CAVM_RST_BOOT); + + return rst_boot.s.c_mul * PLL_REF_CLK; +}

On 22.02.19 19:03, Tim Harvey wrote:
Add Cavium Thunderx common registers, structures, and helper functions
Signed-off-by: Tim Harvey tharvey@gateworks.com
arch/arm/include/asm/arch-thunderx/thunderx.h | 300 ++++++++++++++++++ arch/arm/mach-thunderx/Makefile | 2 +- arch/arm/mach-thunderx/misc.c | 33 ++ 3 files changed, 334 insertions(+), 1 deletion(-) create mode 100644 arch/arm/include/asm/arch-thunderx/thunderx.h create mode 100644 arch/arm/mach-thunderx/misc.c
diff --git a/arch/arm/include/asm/arch-thunderx/thunderx.h b/arch/arm/include/asm/arch-thunderx/thunderx.h new file mode 100644 index 0000000000..58f36c6cdc --- /dev/null +++ b/arch/arm/include/asm/arch-thunderx/thunderx.h @@ -0,0 +1,300 @@ +// SPDX-License-Identifier: GPL-2.0+ +/*
- Copyright (C) 2018, Cavium Inc.
- */
+#ifndef __THUNDERX_H__ +#define __THUNDERX_H__
+/* Registers */ +#define CAVM_RST_BOOT 0x87e006001600ll +#define CAVM_MIO_FUS_DAT2 0x87e003001410ll +#define CAVM_XCVX_RESET 0x87e0db000000ll
+/*
- Flag bits in top byte. The top byte of MIDR_EL1 is defined
- as 0x43, the Cavium implementer code. In this number, bits
- 7,5,4 are defined as zero. We use these bits to signal
- that revision numbers should be ignored. It isn't ideal
- that these are in the middle of an already defined field,
- but this keeps the model numbers as 32 bits
- */
+#define __OM_IGNORE_REVISION 0x80000000 +#define __OM_IGNORE_MINOR_REVISION 0x20000000 +#define __OM_IGNORE_MODEL 0x10000000
+#define CAVIUM_CN88XX_PASS1_0 0x430f0a10 +#define CAVIUM_CN88XX_PASS1_1 0x430f0a11 +#define CAVIUM_CN88XX_PASS2_0 0x431f0a10 +#define CAVIUM_CN88XX_PASS2_1 0x431f0a11 +#define CAVIUM_CN88XX_PASS2_2 0x431f0a12 +#define CAVIUM_CN88XX (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN88XX_PASS1_X (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION) +#define CAVIUM_CN88XX_PASS2_X (CAVIUM_CN88XX_PASS2_0 | __OM_IGNORE_MINOR_REVISION)
+#define CAVIUM_CN83XX_PASS1_0 0x430f0a30 +#define CAVIUM_CN83XX (CAVIUM_CN83XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN83XX_PASS1_X (CAVIUM_CN83XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION)
+#define CAVIUM_CN81XX_PASS1_0 0x430f0a20 +#define CAVIUM_CN81XX (CAVIUM_CN81XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN81XX_PASS1_X (CAVIUM_CN81XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION)
+#define CAVIUM_CN98XX_PASS1_0 0x430f0b10 +#define CAVIUM_CN98XX (CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_REVISION) +#define CAVIUM_CN98XX_PASS1_X (CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_MINOR_REVISION)
+/* These match entire families of chips */ +#define CAVIUM_CN8XXX (CAVIUM_CN88XX_PASS1_0 | __OM_IGNORE_MODEL) +#define CAVIUM_CN9XXX (CAVIUM_CN98XX_PASS1_0 | __OM_IGNORE_MODEL)
+static inline uint64_t cavium_get_model(void) +{
- uint64_t result;
- __asm ("mrs %[rd],MIDR_EL1" : [rd] "=r" (result));
- return result;
+}
+/**
- Return non-zero if the chip matched the passed model.
- @param arg_model One of the CAVIUM_* constants for chip models and passes
- @return Non-zero if match
- */
+static inline int CAVIUM_IS_MODEL(uint32_t arg_model)
Usually upper case function names are reserved for #define'd ones. Could this be lower case?
Alex

Signed-off-by: Tim Harvey tharvey@gateworks.com --- board/cavium/thunderx/thunderx.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c index 57dce5aee0..0cc03a463f 100644 --- a/board/cavium/thunderx/thunderx.c +++ b/board/cavium/thunderx/thunderx.c @@ -92,13 +92,13 @@ int dram_init(void) ssize_t dram_size; int node;
- printf("Initializing\nNodes in system: %zd\n", node_count); + debug("Initializing\nNodes in system: %zd\n", node_count);
gd->ram_size = 0;
for (node = 0; node < node_count; node++) { dram_size = atf_dram_size(node); - printf("Node %d: %zd MBytes of DRAM\n", node, dram_size >> 20); + debug("Node %d: %zd MBytes of DRAM\n", node, dram_size >> 20); gd->ram_size += dram_size; }

Add PCI Single Root I/O Virtualization (SR-IOV) and Enhanced Allocation (EA) support
Signed-off-by: Tim Harvey tharvey@gateworks.com --- arch/x86/cpu/baytrail/cpu.c | 3 +- drivers/ata/ahci.c | 8 +- drivers/i2c/designware_i2c.c | 4 +- drivers/i2c/intel_i2c.c | 3 +- drivers/mmc/pci_mmc.c | 3 +- drivers/net/e1000.c | 5 +- drivers/net/pch_gbe.c | 3 +- drivers/nvme/nvme.c | 3 +- drivers/pci/pci-uclass.c | 304 +++++++++++++++++++++++++++++++++-- drivers/usb/host/ehci-pci.c | 3 +- drivers/usb/host/xhci-pci.c | 3 +- include/pci.h | 56 ++++++- 12 files changed, 370 insertions(+), 28 deletions(-)
diff --git a/arch/x86/cpu/baytrail/cpu.c b/arch/x86/cpu/baytrail/cpu.c index 56e98131d7..cc3eae6cb2 100644 --- a/arch/x86/cpu/baytrail/cpu.c +++ b/arch/x86/cpu/baytrail/cpu.c @@ -46,6 +46,7 @@ int arch_cpu_init_dm(void) { struct udevice *dev; void *base; + size_t size; int ret; int i;
@@ -53,7 +54,7 @@ int arch_cpu_init_dm(void) for (i = 0; i < 2; i++) { ret = dm_pci_bus_find_bdf(PCI_BDF(0, 0x1e, 3 + i), &dev); if (!ret) { - base = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_0, + base = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); hsuart_clock_set(base); } diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index 5fafb63aeb..d6753f140d 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -455,8 +455,9 @@ static int ahci_init_one(struct ahci_uc_priv *uc_priv, pci_dev_t dev) uc_priv->udma_mask = 0x7f; /*Fixme,assume to support UDMA6 */
#if !defined(CONFIG_DM_SCSI) + size_t size; #ifdef CONFIG_DM_PCI - uc_priv->mmio_base = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_5, + uc_priv->mmio_base = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_5, &size, PCI_REGION_MEM);
/* Take from kernel: @@ -467,7 +468,7 @@ static int ahci_init_one(struct ahci_uc_priv *uc_priv, pci_dev_t dev) if (vendor == 0x197b) dm_pci_write_config8(dev, 0x41, 0xa1); #else - uc_priv->mmio_base = pci_map_bar(dev, PCI_BASE_ADDRESS_5, + uc_priv->mmio_base = pci_map_bar(dev, PCI_BASE_ADDRESS_5, &size, PCI_REGION_MEM);
/* Take from kernel: @@ -1189,8 +1190,9 @@ int ahci_probe_scsi(struct udevice *ahci_dev, ulong base) int ahci_probe_scsi_pci(struct udevice *ahci_dev) { ulong base; + size_t size;
- base = (ulong)dm_pci_map_bar(ahci_dev, PCI_BASE_ADDRESS_5, + base = (ulong)dm_pci_map_bar(ahci_dev, PCI_BASE_ADDRESS_5, &size, PCI_REGION_MEM);
return ahci_probe_scsi(ahci_dev, base); diff --git a/drivers/i2c/designware_i2c.c b/drivers/i2c/designware_i2c.c index dbc3326b5a..bd6f4416ab 100644 --- a/drivers/i2c/designware_i2c.c +++ b/drivers/i2c/designware_i2c.c @@ -540,8 +540,10 @@ static int designware_i2c_probe(struct udevice *bus) if (device_is_on_pci_bus(bus)) { #ifdef CONFIG_DM_PCI /* Save base address from PCI BAR */ + size_t size; priv->regs = (struct i2c_regs *) - dm_pci_map_bar(bus, PCI_BASE_ADDRESS_0, PCI_REGION_MEM); + dm_pci_map_bar(bus, PCI_BASE_ADDRESS_0, &size, + PCI_REGION_MEM); #ifdef CONFIG_X86 /* Use BayTrail specific timing values */ priv->scl_sda_cfg = &byt_config; diff --git a/drivers/i2c/intel_i2c.c b/drivers/i2c/intel_i2c.c index f5509fef16..f9de161356 100644 --- a/drivers/i2c/intel_i2c.c +++ b/drivers/i2c/intel_i2c.c @@ -247,10 +247,11 @@ static int intel_i2c_set_bus_speed(struct udevice *bus, unsigned int speed) static int intel_i2c_probe(struct udevice *dev) { struct intel_i2c *priv = dev_get_priv(dev); + size_t size; ulong base;
/* Save base address from PCI BAR */ - priv->base = (ulong)dm_pci_map_bar(dev, PCI_BASE_ADDRESS_4, + priv->base = (ulong)dm_pci_map_bar(dev, PCI_BASE_ADDRESS_4, &size, PCI_REGION_IO); base = priv->base;
diff --git a/drivers/mmc/pci_mmc.c b/drivers/mmc/pci_mmc.c index 182d41637f..84701713a7 100644 --- a/drivers/mmc/pci_mmc.c +++ b/drivers/mmc/pci_mmc.c @@ -28,9 +28,10 @@ static int pci_mmc_probe(struct udevice *dev) struct pci_mmc_plat *plat = dev_get_platdata(dev); struct pci_mmc_priv *priv = dev_get_priv(dev); struct sdhci_host *host = &priv->host; + size_t size; int ret;
- host->ioaddr = (void *)dm_pci_map_bar(dev, PCI_BASE_ADDRESS_0, + host->ioaddr = (void *)dm_pci_map_bar(dev, PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); host->name = dev->name; ret = sdhci_setup_cfg(&plat->cfg, host, 0, 0); diff --git a/drivers/net/e1000.c b/drivers/net/e1000.c index a34f697461..fbfb9052d8 100644 --- a/drivers/net/e1000.c +++ b/drivers/net/e1000.c @@ -5501,6 +5501,9 @@ static int e1000_init_one(struct e1000_hw *hw, int cardnum, pci_dev_t devno, #endif { u32 val; +#ifdef CONFIG_DM_ETH + size_t size; +#endif
/* Assign the passed-in values */ #ifdef CONFIG_DM_ETH @@ -5551,7 +5554,7 @@ static int e1000_init_one(struct e1000_hw *hw, int cardnum, pci_dev_t devno, hw->eeprom_semaphore_present = true; #endif #ifdef CONFIG_DM_ETH - hw->hw_addr = dm_pci_map_bar(devno, PCI_BASE_ADDRESS_0, + hw->hw_addr = dm_pci_map_bar(devno, PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); #else hw->hw_addr = pci_map_bar(devno, PCI_BASE_ADDRESS_0, diff --git a/drivers/net/pch_gbe.c b/drivers/net/pch_gbe.c index 2286dd07e9..3194221796 100644 --- a/drivers/net/pch_gbe.c +++ b/drivers/net/pch_gbe.c @@ -434,6 +434,7 @@ static int pch_gbe_probe(struct udevice *dev) struct pch_gbe_priv *priv; struct eth_pdata *plat = dev_get_platdata(dev); void *iobase; + size_t size; int err;
/* @@ -445,7 +446,7 @@ static int pch_gbe_probe(struct udevice *dev)
priv->dev = dev;
- iobase = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_1, PCI_REGION_MEM); + iobase = dm_pci_map_bar(dev, PCI_BASE_ADDRESS_1, &size, PCI_REGION_MEM);
plat->iobase = (ulong)iobase; priv->mac_regs = (struct pch_gbe_regs *)iobase; diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c index eb6fdeda50..49d8b12b2d 100644 --- a/drivers/nvme/nvme.c +++ b/drivers/nvme/nvme.c @@ -768,12 +768,13 @@ static int nvme_bind(struct udevice *udev) static int nvme_probe(struct udevice *udev) { int ret; + size_t size; struct nvme_dev *ndev = dev_get_priv(udev);
ndev->instance = trailing_strtol(udev->name);
INIT_LIST_HEAD(&ndev->namespaces); - ndev->bar = dm_pci_map_bar(udev, PCI_BASE_ADDRESS_0, + ndev->bar = dm_pci_map_bar(udev, PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); if (readl(&ndev->bar->csts) == -1) { ret = -ENODEV; diff --git a/drivers/pci/pci-uclass.c b/drivers/pci/pci-uclass.c index da49c96ed5..0720ffe5b4 100644 --- a/drivers/pci/pci-uclass.c +++ b/drivers/pci/pci-uclass.c @@ -599,12 +599,22 @@ int dm_pci_hose_probe_bus(struct udevice *bus) { int sub_bus; int ret; + int ea_pos; + u8 reg;
debug("%s\n", __func__);
- sub_bus = pci_get_bus_max() + 1; - debug("%s: bus = %d/%s\n", __func__, sub_bus, bus->name); - dm_pciauto_prescan_setup_bridge(bus, sub_bus); + ea_pos = dm_pci_find_capability(bus, PCI_CAP_ID_EA); + + if (ea_pos) { + dm_pci_read_config8(bus, ea_pos + sizeof(u32) + sizeof(u8), &reg); + sub_bus = reg; + debug("%s: bus = %d/%s\n", __func__, sub_bus, bus->name); + } else { + sub_bus = pci_get_bus_max() + 1; + debug("%s: bus = %d/%s\n", __func__, sub_bus, bus->name); + dm_pciauto_prescan_setup_bridge(bus, sub_bus); + }
ret = device_probe(bus); if (ret) { @@ -612,13 +622,16 @@ int dm_pci_hose_probe_bus(struct udevice *bus) ret); return ret; } - if (sub_bus != bus->seq) { - printf("%s: Internal error, bus '%s' got seq %d, expected %d\n", - __func__, bus->name, bus->seq, sub_bus); - return -EPIPE; + + if (!ea_pos) { + if (sub_bus != bus->seq) { + printf("%s: Internal error, bus '%s' got seq %d, expected %d\n", + __func__, bus->name, bus->seq, sub_bus); + return -EPIPE; + } + sub_bus = pci_get_bus_max(); + dm_pciauto_postscan_setup_bridge(bus, sub_bus); } - sub_bus = pci_get_bus_max(); - dm_pciauto_postscan_setup_bridge(bus, sub_bus);
return sub_bus; } @@ -828,6 +841,7 @@ int pci_bind_bus_devices(struct udevice *bus) pplat->vendor = vendor; pplat->device = device; pplat->class = class; + pplat->is_phys = true; }
return 0; @@ -1326,14 +1340,258 @@ pci_addr_t dm_pci_phys_to_bus(struct udevice *dev, phys_addr_t phys_addr, return bus_addr; }
-void *dm_pci_map_bar(struct udevice *dev, int bar, int flags) +/* Read an Enhanced Allocation (EA) entry */ +static int dm_pci_ea_entry_read(struct udevice *dev, int offset, int *bei, + pci_addr_t *start, size_t *size) { - pci_addr_t pci_bus_addr; + u32 base; + u32 max_offset; + u8 prop; + int ent_offset = offset; + int ent_size; + u32 dw0; + + dm_pci_read_config32(dev, ent_offset, &dw0); + + debug("%s: %d: dw0: %lx\n", __FUNCTION__, __LINE__, (unsigned long)dw0); + + ent_offset += sizeof(u32); + + /* Entry size field indicates DWORDs after 1st */ + ent_size = ((dw0 & PCI_EA_ES) + 1) * sizeof(u32); + + if (!(dw0 & PCI_EA_ENABLE)) + goto out; + *bei = PCI_EA_BEI(dw0); + + prop = PCI_EA_PP(dw0); + + debug("EA property: %x\n", prop); + + /* + * If the Property is in the reserved range, try the Secondary + * Property instead. + */ + if (prop > PCI_EA_P_BRIDGE_IO && prop < PCI_EA_P_MEM_RESERVED) + prop = PCI_EA_SP(dw0); + if (prop > PCI_EA_P_BRIDGE_IO) + goto out; + + debug("EA property: %x\n", prop); + + /* Read Base */ + dm_pci_read_config32(dev, ent_offset, &base); + ent_offset += sizeof(u32); + *start = (pci_addr_t)base & PCI_EA_FIELD_MASK; + + /* Read MaxOffset */ + dm_pci_read_config32(dev, ent_offset, &max_offset); + ent_offset += sizeof(u32); + + /* Read Base MSBs (if 64-bit entry) */ + if (base & PCI_EA_IS_64) { + dm_pci_read_config32(dev, ent_offset, &base); + ent_offset += sizeof(u32); + + *start |= (pci_addr_t)base << 32; + } + + debug("EA (%u,%u) start = %lx\n", PCI_EA_BEI(dw0), prop, (unsigned long)*start); + + *size = ((size_t)max_offset | 0x03) + 1; + + /* Read MaxOffset MSBs (if 64-bit entry) */ + if (max_offset & PCI_EA_IS_64) { + dm_pci_read_config32(dev, ent_offset, &max_offset); + ent_offset += sizeof(u32); + + *size |= (size_t)max_offset << 32; + } + + debug("EA (%u,%u) size = %lx\n", PCI_EA_BEI(dw0), prop, (unsigned long)*size); + + if (*start + *size < *start) { + *size = 0; + *start = 0; + printf("EA Entry crosses address boundary\n"); + goto 
out; + } + + if (ent_size != ent_offset - offset) { + printf("EA Entry Size (%d) does not match length read (%d)\n", + ent_size, ent_offset - offset); + goto out; + } + +out: + return offset + ent_size; +} + +/* Read an Enhanced Allocation (EA) BAR */ +int dm_pci_ea_bar_read(struct udevice *dev, int bar, pci_addr_t *start, size_t *size) +{ + int ea; + int offset; + u8 num_ent; + u8 hdr_type; + int i, bei = -1; + + ea = dm_pci_find_capability(dev, PCI_CAP_ID_EA); + + dm_pci_read_config8(dev, ea + PCI_EA_NUM_ENT, &num_ent); + num_ent &= PCI_EA_NUM_ENT_MASK; + + offset = ea + PCI_EA_FIRST_ENT; + + dm_pci_read_config8(dev, PCI_HEADER_TYPE, &hdr_type); + + /* Skip DWORD 2 for type 1 functions */ + if (hdr_type == PCI_HEADER_TYPE_BRIDGE) + offset += sizeof(u32); + + for (i = 0; (i < num_ent) && (bar != bei); i++) { + offset = dm_pci_ea_entry_read(dev, offset, &bei, start, size); + } + + return (bar == bei); +} + +int dm_pci_sriov_init(struct udevice *pdev, int vf_en) +{ + u16 vendor, device; + struct udevice *bus; + struct udevice *dev; + pci_dev_t bdf; + u16 ctrl; + u16 num_vfs; + u16 total_vf; + u16 vf_offset; + u16 vf_stride; + int vf, ret; + int pos; + + pos = dm_pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV); + if (!pos) { + printf("Error: SRIOV capability not found\n"); + return -ENODEV; + } + + dm_pci_read_config16(pdev, pos + PCI_SRIOV_CTRL, &ctrl); + + dm_pci_read_config16(pdev, pos + PCI_SRIOV_TOTAL_VF, &total_vf); + + if (vf_en > total_vf) + vf_en = total_vf; + + dm_pci_write_config16(pdev, pos + PCI_SRIOV_NUM_VF, vf_en); + + ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE; + dm_pci_write_config16(pdev, pos + PCI_SRIOV_CTRL, ctrl); + + dm_pci_read_config16(pdev, pos + PCI_SRIOV_NUM_VF, &num_vfs); + + dm_pci_read_config16(pdev, pos + PCI_SRIOV_VF_OFFSET, &vf_offset); + dm_pci_read_config16(pdev, pos + PCI_SRIOV_VF_STRIDE, &vf_stride); + + dm_pci_read_config16(pdev, PCI_VENDOR_ID, &vendor); + dm_pci_read_config16(pdev, pos + PCI_SRIOV_VF_DID, &device); + + 
bdf = dm_pci_get_bdf(pdev); + + pci_get_bus(PCI_BUS(bdf), &bus); + + if (!bus) + return -ENODEV; + + bdf += PCI_BDF(0, 0, vf_offset); + + for (vf = 0; vf < num_vfs; vf++) { + struct pci_child_platdata *pplat; + ulong class; + + pci_bus_read_config(bus, bdf, PCI_CLASS_REVISION, + &class, PCI_SIZE_32); + + class >>= 8; + + debug("%s: bus %d/%s: found VF %x:%x\n", __func__, + bus->seq, bus->name, PCI_DEV(bdf), PCI_FUNC(bdf)); + + /* Find this device in the device tree */ + ret = pci_bus_find_devfn(bus, PCI_MASK_BUS(bdf), &dev); + + if (ret == -ENODEV) { + struct pci_device_id find_id; + + memset(&find_id, 0, sizeof(find_id)); + + find_id.vendor = vendor; + find_id.device = device; + find_id.class = class >> 8; + + ret = pci_find_and_bind_driver(bus, &find_id, + bdf, &dev); + + if (ret) + return ret; + } + + /* Update the platform data */ + pplat = dev_get_parent_platdata(dev); + pplat->devfn = PCI_MASK_BUS(bdf); + pplat->vendor = vendor; + pplat->device = device; + pplat->class = class; + pplat->is_phys = false; + pplat->pdev = pdev; + pplat->vf_id = vf * vf_stride + vf_offset; + + bdf += PCI_BDF(0, 0, vf_stride); + } + + return 0; + +} + +void *dm_pci_map_bar(struct udevice *dev, int bar, size_t *size, int flags) +{ + int pos; + pci_addr_t pci_bus_start; u32 bar_response; + struct pci_child_platdata *pdata = dev_get_parent_platdata(dev);
- /* read BAR address */ - dm_pci_read_config32(dev, bar, &bar_response); - pci_bus_addr = (pci_addr_t)(bar_response & ~0xf); + if (!pdata->is_phys) { + if (bar < 9 || bar > 14) + return NULL; + dev = pdata->pdev; + } + + pos = dm_pci_find_capability(dev, PCI_CAP_ID_EA); + + if (pos) { + dm_pci_ea_bar_read(dev, bar, &pci_bus_start, size); + } else { + /* read BAR address */ + if (bar >= 0 && bar <= 5) { + bar = PCI_BASE_ADDRESS_0 + bar * 4; + } else if (bar >= 9 && bar <= 14) { + pos = dm_pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV); + bar = pos + PCI_SRIOV_BAR + (bar - 9) * 4; + } + dm_pci_read_config32(dev, bar, + &bar_response); + pci_bus_start = (pci_addr_t)(bar_response & ~0xf); + + if ((bar_response & PCI_BASE_ADDRESS_MEM_TYPE_MASK) == + PCI_BASE_ADDRESS_MEM_TYPE_64) { + dm_pci_read_config32(dev, bar + 4, &bar_response); + pci_bus_start |= (pci_addr_t)bar_response << 32; + } + } + + if (!pdata->is_phys) { + pci_bus_start += (pdata->vf_id - 1) * (*size); + }
/* * Pass "0" as the length argument to pci_bus_to_virt. The arg @@ -1341,7 +1599,7 @@ void *dm_pci_map_bar(struct udevice *dev, int bar, int flags) * linear mapping. In the future, this could read the BAR size * and pass that as the size if needed. */ - return dm_pci_bus_to_virt(dev, pci_bus_addr, flags, 0, MAP_NOCACHE); + return dm_pci_bus_to_virt(dev, pci_bus_start, flags, 0, MAP_NOCACHE); }
int dm_pci_find_capability(struct udevice *dev, int cap) @@ -1412,6 +1670,22 @@ int dm_pci_find_ext_capability(struct udevice *dev, int cap) return 0; }
+int dm_pci_sriov_get_totalvfs(struct udevice *pdev) +{ + u16 total_vf; + int pos; + + pos = dm_pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV); + if (!pos) { + printf("Error: SRIOV capability not found\n"); + return -ENODEV; + } + + dm_pci_read_config16(pdev, pos + PCI_SRIOV_TOTAL_VF, &total_vf); + + return total_vf; +} + UCLASS_DRIVER(pci) = { .id = UCLASS_PCI, .name = "pci", diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c index 6150f3d888..db55d136b6 100644 --- a/drivers/usb/host/ehci-pci.c +++ b/drivers/usb/host/ehci-pci.c @@ -26,6 +26,7 @@ static int ehci_pci_init(struct udevice *dev, struct ehci_hccr **ret_hccr, struct ehci_pci_priv *priv = dev_get_priv(dev); struct ehci_hccr *hccr; struct ehci_hcor *hcor; + size_t size; int ret; u32 cmd;
@@ -34,7 +35,7 @@ static int ehci_pci_init(struct udevice *dev, struct ehci_hccr **ret_hccr, return ret;
hccr = (struct ehci_hccr *)dm_pci_map_bar(dev, - PCI_BASE_ADDRESS_0, PCI_REGION_MEM); + PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); hcor = (struct ehci_hcor *)((uintptr_t) hccr + HC_LENGTH(ehci_readl(&hccr->cr_capbase)));
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index b995aef997..d42f06bc32 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -16,10 +16,11 @@ static void xhci_pci_init(struct udevice *dev, struct xhci_hccr **ret_hccr, { struct xhci_hccr *hccr; struct xhci_hcor *hcor; + size_t size; u32 cmd;
hccr = (struct xhci_hccr *)dm_pci_map_bar(dev, - PCI_BASE_ADDRESS_0, PCI_REGION_MEM); + PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); hcor = (struct xhci_hcor *)((uintptr_t) hccr + HC_LENGTH(xhci_readl(&hccr->cr_capbase)));
diff --git a/include/pci.h b/include/pci.h index 938a8390cb..033d5adf2a 100644 --- a/include/pci.h +++ b/include/pci.h @@ -411,6 +411,39 @@ #define PCI_MSI_DATA_32 8 /* 16 bits of data for 32-bit devices */ #define PCI_MSI_DATA_64 12 /* 16 bits of data for 64-bit devices */
+/* Single Root I/O Virtualization */ +#define PCI_SRIOV_BAR 0x24 /* VF BAR0 */ +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */ +#define PCI_SRIOV_CTRL_VFE 0x01 /* VF Enable */ +#define PCI_SRIOV_CTRL_VFM 0x02 /* VF Migration Enable */ +#define PCI_SRIOV_CTRL_INTR 0x04 /* VF Migration Interrupt Enable */ +#define PCI_SRIOV_CTRL_MSE 0x08 /* VF Memory Space Enable */ +#define PCI_SRIOV_CTRL_ARI 0x10 /* ARI Capable Hierarchy */ +#define PCI_SRIOV_STATUS 0x0a /* SR-IOV Status */ +#define PCI_SRIOV_STATUS_VFM 0x01 /* VF Migration Status */ +#define PCI_SRIOV_INITIAL_VF 0x0c /* Initial VFs */ +#define PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ +#define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs */ +#define PCI_SRIOV_FUNC_LINK 0x12 /* Function Dependency Link */ +#define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */ +#define PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */ +#define PCI_SRIOV_VF_DID 0x1a /* VF Device ID */ + +/* Enhanced Allocation (EA) */ +#define PCI_EA_NUM_ENT 2 /* Number of Capability Entries */ +#define PCI_EA_NUM_ENT_MASK 0x3f /* Num Entries Mask */ +#define PCI_EA_FIRST_ENT 4 /* First EA Entry in List */ +#define PCI_EA_ES 0x7 /* Entry Size */ +#define PCI_EA_BEI(x) (((x) >> 4) & 0xf) /* BAR Equivalent Indicator */ +#define PCI_EA_PP(x) (((x) >> 8) & 0xff) /* Primary Properties */ +#define PCI_EA_SP(x) (((x) >> 16) & 0xff) /* Secondary Properties */ +#define PCI_EA_P_BRIDGE_IO 0x07 /* Bridge I/O Space */ +#define PCI_EA_P_MEM_RESERVED 0xfd /* Reserved Memory */ +#define PCI_EA_ENABLE (1 << 31) /* Enable for this entry */ +#define PCI_EA_IS_64 (1 << 1) /* 64-bit field flag */ +#define PCI_EA_FIELD_MASK 0xfffffffc /* For Base & Max Offset */ + #define PCI_MAX_PCI_DEVICES 32 #define PCI_MAX_PCI_FUNCTIONS 8
@@ -841,6 +874,9 @@ struct pci_child_platdata { unsigned short vendor; unsigned short device; unsigned int class; + bool is_phys; + struct udevice *pdev; + int vf_id; };
/* PCI bus operations */ @@ -1307,10 +1343,11 @@ pci_addr_t dm_pci_phys_to_bus(struct udevice *dev, phys_addr_t addr, * * @dev: Device to check * @bar: Bar number to read (numbered from 0) + * @size: pointer to var to assign BAR size * @flags: Flags for the region type (PCI_REGION_...) * @return: pointer to the virtual address to use */ -void *dm_pci_map_bar(struct udevice *dev, int bar, int flags); +void *dm_pci_map_bar(struct udevice *dev, int bar, size_t *size, int flags);
/** * dm_pci_find_capability() - find a capability @@ -1357,6 +1394,23 @@ int dm_pci_find_capability(struct udevice *dev, int cap); */ int dm_pci_find_ext_capability(struct udevice *dev, int cap);
+/** + * dm_pci_sriov_init() - Enable SR-IOV virtual functions + * + * @dev: PCI device + * @vf_en: number of virtual functions to enable + * @return: 0 if successful, error otherwise + */ +int dm_pci_sriov_init(struct udevice *dev, int vf_en); + +/** + * dm_pci_sriov_get_totalvfs() - Get number of SR-IOV virtual functions + * + * @dev: PCI device + * @return: number of SR-IOV virtual functions + */ +int dm_pci_sriov_get_totalvfs(struct udevice *pdev); + #define dm_pci_virt_to_bus(dev, addr, flags) \ dm_pci_phys_to_bus(dev, (virt_to_phys(addr)), (flags)) #define dm_pci_bus_to_virt(dev, addr, flags, len, map_flags) \

Signed-off-by: Tim Harvey tharvey@gateworks.com --- include/fdtdec.h | 11 +++++++++++ lib/fdtdec.c | 13 +++++++++++++ 2 files changed, 24 insertions(+)
diff --git a/include/fdtdec.h b/include/fdtdec.h index c26df50543..b56be097eb 100644 --- a/include/fdtdec.h +++ b/include/fdtdec.h @@ -460,6 +460,17 @@ int fdtdec_get_pci_vendev(const void *blob, int node, int fdtdec_get_pci_bar32(struct udevice *dev, struct fdt_pci_addr *addr, u32 *bar);
+/** + * Look at the bus-range property of a device node and return the PCI bus + * range for this node. + * The property must hold two cells: the first and last bus numbers. + * @param blob FDT blob + * @param node node to examine + * @param res the resource structure to return the bus range + */ +int fdtdec_get_pci_bus_range(const void *blob, int node, + struct fdt_resource *res); + /** * Look up a 32-bit integer property in a node and return it. The property * must have at least 4 bytes of data. The value of the first cell is diff --git a/lib/fdtdec.c b/lib/fdtdec.c index a420ba1885..7f63b1a312 100644 --- a/lib/fdtdec.c +++ b/lib/fdtdec.c @@ -305,6 +305,19 @@ int fdtdec_get_pci_bar32(struct udevice *dev, struct fdt_pci_addr *addr,
return 0; } + +int fdtdec_get_pci_bus_range(const void *blob, int node, + struct fdt_resource *res) +{ + const u32 *values; + int len; + values = fdt_getprop(blob, node, "bus-range", &len); + if (!values || len < sizeof(*values) * 2) + return -EINVAL; + res->start = be32_to_cpup(values++); + res->end = be32_to_cpup(values); + return 0; +} #endif
uint64_t fdtdec_get_uint64(const void *blob, int node, const char *prop_name,

Signed-off-by: Tim Harvey tharvey@gateworks.com --- board/cavium/thunderx/Kconfig | 4 + board/cavium/thunderx/thunderx.c | 16 ++-- configs/thunderx_81xx_defconfig | 6 ++ configs/thunderx_88xx_defconfig | 6 ++ drivers/pci/Kconfig | 9 ++ drivers/pci/Makefile | 1 + drivers/pci/pci_thunderx.c | 160 +++++++++++++++++++++++++++++++ include/pci_ids.h | 15 +++ 8 files changed, 210 insertions(+), 7 deletions(-) create mode 100644 drivers/pci/pci_thunderx.c
diff --git a/board/cavium/thunderx/Kconfig b/board/cavium/thunderx/Kconfig index 16df1a9fc2..c840f9c1ed 100644 --- a/board/cavium/thunderx/Kconfig +++ b/board/cavium/thunderx/Kconfig @@ -1,5 +1,9 @@ if ARCH_THUNDERX
+config SYS_PCI_64BIT + bool + default y + config SYS_CPU string default "armv8" diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c index 0cc03a463f..28cf2aee22 100644 --- a/board/cavium/thunderx/thunderx.c +++ b/board/cavium/thunderx/thunderx.c @@ -72,6 +72,15 @@ static struct mm_region thunderx_mem_map[] = {
struct mm_region *mem_map = thunderx_mem_map;
+#ifdef CONFIG_BOARD_EARLY_INIT_R +int board_early_init_r(void) +{ + pci_init(); + + return 0; +} +#endif + int board_init(void) { #if CONFIG_IS_ENABLED(OF_CONTROL) @@ -127,10 +136,3 @@ int board_eth_init(bd_t *bis)
return rc; } - -#ifdef CONFIG_PCI -void pci_init_board(void) -{ - printf("DEBUG: PCI Init TODO *****\n"); -} -#endif diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index 4f6b4ad18c..3ec7d6cd4f 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -12,6 +12,7 @@ CONFIG_USE_BOOTARGS=y CONFIG_BOOTARGS="console=ttyAMA0,115200n8 earlycon=pl011,0x87e028000000 debug maxcpus=4 rootwait rw root=/dev/mmcblk0p2 coherent_pool=16M" # CONFIG_DISPLAY_CPUINFO is not set # CONFIG_DISPLAY_BOARDINFO is not set +CONFIG_BOARD_EARLY_INIT_R=y CONFIG_HUSH_PARSER=y CONFIG_SYS_PROMPT="ThunderX_81XX> " # CONFIG_CMD_EXPORTENV is not set @@ -20,10 +21,15 @@ CONFIG_SYS_PROMPT="ThunderX_81XX> " # CONFIG_CMD_SAVEENV is not set # CONFIG_CMD_ENV_EXISTS is not set # CONFIG_CMD_FLASH is not set +CONFIG_CMD_PCI=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y # CONFIG_MMC is not set +CONFIG_PCI=y +CONFIG_DM_PCI=y +CONFIG_DM_PCI_COMPAT=y +CONFIG_PCI_THUNDERX=y CONFIG_DM_SERIAL=y CONFIG_DEBUG_UART_PL011=y CONFIG_DEBUG_UART_SKIP_INIT=y diff --git a/configs/thunderx_88xx_defconfig b/configs/thunderx_88xx_defconfig index fe4643f52e..e30d549896 100644 --- a/configs/thunderx_88xx_defconfig +++ b/configs/thunderx_88xx_defconfig @@ -12,6 +12,7 @@ CONFIG_USE_BOOTARGS=y CONFIG_BOOTARGS="console=ttyAMA0,115200n8 earlycon=pl011,0x87e024000000 debug maxcpus=48 rootwait rw root=/dev/sda2 coherent_pool=16M" # CONFIG_DISPLAY_CPUINFO is not set # CONFIG_DISPLAY_BOARDINFO is not set +CONFIG_BOARD_EARLY_INIT_R=y CONFIG_HUSH_PARSER=y # CONFIG_AUTO_COMPLETE is not set CONFIG_SYS_PROMPT="ThunderX_88XX> " @@ -21,10 +22,15 @@ CONFIG_SYS_PROMPT="ThunderX_88XX> " # CONFIG_CMD_SAVEENV is not set # CONFIG_CMD_ENV_EXISTS is not set # CONFIG_CMD_FLASH is not set +CONFIG_CMD_PCI=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-88xx" CONFIG_DM=y # CONFIG_MMC is not set +CONFIG_PCI=y +CONFIG_DM_PCI=y 
+CONFIG_DM_PCI_COMPAT=y +CONFIG_PCI_THUNDERX=y CONFIG_DM_SERIAL=y CONFIG_DEBUG_UART_PL011=y CONFIG_DEBUG_UART_SKIP_INIT=y diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index f59803dbd6..7be97addc4 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -90,6 +90,15 @@ config PCI_TEGRA with a total of 5 lanes. Some boards require this for Ethernet support to work (e.g. beaver, jetson-tk1).
+config PCI_THUNDERX + bool "ThunderX PCI support" + depends on ARCH_THUNDERX + select PCIE_ECAM_GENERIC + help + Enable support for the Cavium ThunderX SoC family PCI controllers. + These controllers provide PCI configuration access to all on-board + peripherals so it should only be disabled for testing purposes + config PCI_XILINX bool "Xilinx AXI Bridge for PCI Express" depends on DM_PCI diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 4923641895..0974e77789 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -31,6 +31,7 @@ obj-$(CONFIG_PCI_TEGRA) += pci_tegra.o obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o obj-$(CONFIG_PCIE_DW_MVEBU) += pcie_dw_mvebu.o obj-$(CONFIG_PCIE_LAYERSCAPE) += pcie_layerscape.o +obj-$(CONFIG_PCI_THUNDERX) += pci_thunderx.o obj-$(CONFIG_PCIE_LAYERSCAPE) += pcie_layerscape_fixup.o obj-$(CONFIG_PCI_XILINX) += pcie_xilinx.o obj-$(CONFIG_PCIE_INTEL_FPGA) += pcie_intel_fpga.o diff --git a/drivers/pci/pci_thunderx.c b/drivers/pci/pci_thunderx.c new file mode 100644 index 0000000000..f22141ca44 --- /dev/null +++ b/drivers/pci/pci_thunderx.c @@ -0,0 +1,160 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. 
+ */ +#include <common.h> +#include <dm.h> +#include <pci.h> + +#include <asm/io.h> + +struct thunderx_pci { + unsigned int type; + struct fdt_resource cfg; + struct fdt_resource bus; +}; + +static int pci_thunderx_pem_read_config(struct udevice *bus, pci_dev_t bdf, + uint offset, ulong *valuep, + enum pci_size_t size) +{ + struct thunderx_pci *pcie = (void *)dev_get_priv(bus); + struct pci_controller *hose = dev_get_uclass_priv(bus); + uintptr_t address; + u32 b, d, f; + u8 hdrtype; + u8 pri_bus = pcie->bus.start + 1 - hose->first_busno; + u32 bus_offs = (pri_bus << 16) | (pri_bus << 8) | (pri_bus << 0); + + b = PCI_BUS(bdf) + 1 - hose->first_busno; + d = PCI_DEV(bdf); + f = PCI_FUNC(bdf); + + address = (b << 24) | (d << 19) | (f << 16); + + address += pcie->cfg.start; + + *valuep = pci_conv_32_to_size(~0UL, offset, size); + + if (b == 1 && d > 0) + return 0; + + switch (size) { + case PCI_SIZE_8: + *valuep = readb(address + offset); + break; + case PCI_SIZE_16: + *valuep = readw(address + offset); + break; + case PCI_SIZE_32: + *valuep = readl(address + offset); + break; + default: + printf("Invalid size\n"); + } + + hdrtype = readb(address + PCI_HEADER_TYPE); + + if ((hdrtype == PCI_HEADER_TYPE_BRIDGE) && + (offset >= PCI_PRIMARY_BUS) && + (offset <= PCI_SUBORDINATE_BUS) && + *valuep != pci_conv_32_to_size(~0UL, offset, size)) { + *valuep -= pci_conv_32_to_size(bus_offs, offset, size); + } + return 0; +} + +static int pci_thunderx_pem_write_config(struct udevice *bus, pci_dev_t bdf, + uint offset, ulong value, + enum pci_size_t size) +{ + struct thunderx_pci *pcie = (void *)dev_get_priv(bus); + struct pci_controller *hose = dev_get_uclass_priv(bus); + uintptr_t address; + u32 b, d, f; + u8 hdrtype; + u8 pri_bus = pcie->bus.start + 1 - hose->first_busno; + u32 bus_offs = (pri_bus << 16) | (pri_bus << 8) | (pri_bus << 0); + + b = PCI_BUS(bdf) + 1 - hose->first_busno; + d = PCI_DEV(bdf); + f = PCI_FUNC(bdf); + + address = (b << 24) | (d << 19) | (f << 16); + + 
address += pcie->cfg.start; + + hdrtype = readb(address + PCI_HEADER_TYPE); + + if ((hdrtype == PCI_HEADER_TYPE_BRIDGE) && + (offset >= PCI_PRIMARY_BUS) && + (offset <= PCI_SUBORDINATE_BUS) && + (value != pci_conv_32_to_size(~0UL, offset, size))) { + value += pci_conv_32_to_size(bus_offs, offset, size); + } + + if (b == 1 && d > 0) + return 0; + + switch (size) { + case PCI_SIZE_8: + writeb(value, address + offset); + break; + case PCI_SIZE_16: + writew(value, address + offset); + break; + case PCI_SIZE_32: + writel(value, address + offset); + break; + default: + printf("Invalid size\n"); + } + return 0; +} + +static int pci_thunderx_ofdata_to_platdata(struct udevice *dev) +{ + return 0; +} + +static int pci_thunderx_probe(struct udevice *dev) +{ + struct thunderx_pci *pcie = (void *)dev_get_priv(dev); + int err; + + err = fdt_get_resource(gd->fdt_blob, dev->node.of_offset, "reg", 0, + &pcie->cfg); + if (err) { + printf("Error reading resource: %s\n", fdt_strerror(err)); + return err; + } + + err = fdtdec_get_pci_bus_range(gd->fdt_blob, dev->node.of_offset, + &pcie->bus); + if (err) { + printf("Error reading resource: %s\n", fdt_strerror(err)); + return err; + } + + return 0; +} + +static const struct dm_pci_ops pci_thunderx_pem_ops = { + .read_config = pci_thunderx_pem_read_config, + .write_config = pci_thunderx_pem_write_config, +}; + +static const struct udevice_id pci_thunderx_pem_ids[] = { + { .compatible = "cavium,pci-host-thunder-pem" }, + { } +}; + +U_BOOT_DRIVER(pci_thunderx_pcie) = { + .name = "pci_thunderx_pem", + .id = UCLASS_PCI, + .of_match = pci_thunderx_pem_ids, + .ops = &pci_thunderx_pem_ops, + .ofdata_to_platdata = pci_thunderx_ofdata_to_platdata, + .probe = pci_thunderx_probe, + .priv_auto_alloc_size = sizeof(struct thunderx_pci), +}; diff --git a/include/pci_ids.h b/include/pci_ids.h index fdda679cc0..948771271e 100644 --- a/include/pci_ids.h +++ b/include/pci_ids.h @@ -3109,6 +3109,21 @@
#define PCI_VENDOR_ID_3COM_2 0xa727
+#define PCI_VENDOR_ID_CAVIUM 0x177d +#define PCI_DEVICE_ID_THUNDERX_NIC_VF_1 0x0011 +#define PCI_DEVICE_ID_THUNDERX_GPIO 0xa00a +#define PCI_DEVICE_ID_THUNDERX_SPI 0xa00b +#define PCI_DEVICE_ID_THUNDERX_MMC 0xa010 +#define PCI_DEVICE_ID_THUNDERX_TWSI 0xa012 +#define PCI_DEVICE_ID_THUNDERX_AHCI 0xa01c +#define PCI_DEVICE_ID_THUNDERX_NIC_PF 0xa01e +#define PCI_DEVICE_ID_THUNDERX_SMI 0xa02b +#define PCI_DEVICE_ID_THUNDERX_BGX 0xa026 +#define PCI_DEVICE_ID_THUNDERX_NIC_VF 0xa034 +#define PCI_DEVICE_ID_THUNDERX_RGX 0xa054 +#define PCI_DEVICE_ID_THUNDERX_XHCI 0xa055 +#define PCI_DEVICE_ID_THUNDERX_NIC_XCV 0xa056 + #define PCI_VENDOR_ID_DIGIUM 0xd161 #define PCI_DEVICE_ID_DIGIUM_HFC4S 0xb410

TODO:
- determine proper workaround for disabling found_multi
- determine proper workaround for decode_regions
Signed-off-by: Tim Harvey tharvey@gateworks.com --- drivers/pci/pci-uclass.c | 12 ++++++++++++ include/pci.h | 2 +- 2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pci-uclass.c b/drivers/pci/pci-uclass.c index 0720ffe5b4..845e280a60 100644 --- a/drivers/pci/pci-uclass.c +++ b/drivers/pci/pci-uclass.c @@ -783,8 +783,11 @@ int pci_bind_bus_devices(struct udevice *bus) struct udevice *dev; ulong class;
+/* causes devices beyond the internal bridge on the Octeon TX to not enum */ +#if !defined(CONFIG_ARCH_THUNDERX) if (!PCI_FUNC(bdf)) found_multi = false; +#endif if (PCI_FUNC(bdf) && !found_multi) continue; /* Check only the first access, we don't expect problems */ @@ -910,6 +913,9 @@ static void decode_regions(struct pci_controller *hose, ofnode parent_node, continue; }
+#if defined(CONFIG_ARCH_THUNDERX) + pos = hose->region_count++; +#else pos = -1; for (i = 0; i < hose->region_count; i++) { if (hose->regions[i].flags == type) @@ -917,10 +923,16 @@ static void decode_regions(struct pci_controller *hose, ofnode parent_node, } if (pos == -1) pos = hose->region_count++; +#endif debug(" - type=%d, pos=%d\n", type, pos); pci_set_region(hose->regions + pos, pci_addr, addr, size, type); }
+ if (hose->region_count == MAX_PCI_REGIONS) { + printf("PCI region count reached limit, cannot add local memory region"); + return; + } + /* Add a region for our local memory */ #ifdef CONFIG_NR_DRAM_BANKS bd_t *bd = gd->bd; diff --git a/include/pci.h b/include/pci.h index 033d5adf2a..38d44d5b67 100644 --- a/include/pci.h +++ b/include/pci.h @@ -567,7 +567,7 @@ extern void pci_cfgfunc_do_nothing(struct pci_controller* hose, pci_dev_t dev, extern void pci_cfgfunc_config_device(struct pci_controller* hose, pci_dev_t dev, struct pci_config_table *);
-#define MAX_PCI_REGIONS 7 +#define MAX_PCI_REGIONS 10
#define INDIRECT_TYPE_NO_PCIE_LINK 1

Signed-off-by: Tim Harvey tharvey@gateworks.com --- arch/arm/include/asm/io.h | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h index 5df74728de..5699e6f23a 100644 --- a/arch/arm/include/asm/io.h +++ b/arch/arm/include/asm/io.h @@ -172,6 +172,14 @@ static inline void __raw_readsl(unsigned long addr, void *data, int longlen) #define clrsetbits(type, addr, clear, set) \ out_##type((addr), (in_##type(addr) & ~(clear)) | (set))
+#define clrbits_be64(addr, clear) clrbits(be64, addr, clear) +#define setbits_be64(addr, set) setbits(be64, addr, set) +#define clrsetbits_be64(addr, clear, set) clrsetbits(be64, addr, clear, set) + +#define clrbits_le64(addr, clear) clrbits(le64, addr, clear) +#define setbits_le64(addr, set) setbits(le64, addr, set) +#define clrsetbits_le64(addr, clear, set) clrsetbits(le64, addr, clear, set) + #define clrbits_be32(addr, clear) clrbits(be32, addr, clear) #define setbits_be32(addr, set) setbits(be32, addr, set) #define clrsetbits_be32(addr, clear, set) clrsetbits(be32, addr, clear, set)

Signed-off-by: Tim Harvey tharvey@gateworks.com --- configs/thunderx_81xx_defconfig | 4 + drivers/gpio/Kconfig | 7 ++ drivers/gpio/Makefile | 1 + drivers/gpio/thunderx_gpio.c | 189 ++++++++++++++++++++++++++++++++ 4 files changed, 201 insertions(+) create mode 100644 drivers/gpio/thunderx_gpio.c
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index 3ec7d6cd4f..d07b0e5804 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -21,10 +21,14 @@ CONFIG_SYS_PROMPT="ThunderX_81XX> " # CONFIG_CMD_SAVEENV is not set # CONFIG_CMD_ENV_EXISTS is not set # CONFIG_CMD_FLASH is not set +CONFIG_CMD_GPIO=y CONFIG_CMD_PCI=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y +CONFIG_DM_GPIO=y +CONFIG_LED=y +CONFIG_LED_GPIO=y # CONFIG_MMC is not set CONFIG_PCI=y CONFIG_DM_PCI=y diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig index 5cd8b34400..b34f66969a 100644 --- a/drivers/gpio/Kconfig +++ b/drivers/gpio/Kconfig @@ -238,6 +238,13 @@ config PIC32_GPIO help Say yes here to support Microchip PIC32 GPIOs.
+config THUNDERX_GPIO
+	bool "Cavium ThunderX GPIO driver"
+	depends on DM_GPIO
+	default y
+	help
+	  Say yes here to support Cavium ThunderX GPIOs.
+
 config STM32F7_GPIO
 	bool "ST STM32 GPIO driver"
 	depends on DM_GPIO && (STM32 || ARCH_STM32MP)
diff --git a/drivers/gpio/Makefile b/drivers/gpio/Makefile
index f186120684..4c22c5f1d2 100644
--- a/drivers/gpio/Makefile
+++ b/drivers/gpio/Makefile
@@ -54,6 +54,7 @@ obj-$(CONFIG_HIKEY_GPIO) += hi6220_gpio.o
 obj-$(CONFIG_HSDK_CREG_GPIO) += hsdk-creg-gpio.o
 obj-$(CONFIG_IMX_RGPIO2P) += imx_rgpio2p.o
 obj-$(CONFIG_PIC32_GPIO) += pic32_gpio.o
+obj-$(CONFIG_THUNDERX_GPIO) += thunderx_gpio.o
 obj-$(CONFIG_MVEBU_GPIO) += mvebu_gpio.o
 obj-$(CONFIG_MSM_GPIO) += msm_gpio.o
 obj-$(CONFIG_$(SPL_)PCF8575_GPIO) += pcf8575_gpio.o
diff --git a/drivers/gpio/thunderx_gpio.c b/drivers/gpio/thunderx_gpio.c
new file mode 100644
index 0000000000..5d38ddccdd
--- /dev/null
+++ b/drivers/gpio/thunderx_gpio.c
@@ -0,0 +1,189 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2018, Cavium Inc.
+ */
+
+#include <common.h>
+#include <dm.h>
+#include <malloc.h>
+#include <errno.h>
+#include <fdtdec.h>
+#include <asm/io.h>
+#include <asm/gpio.h>
+#include <asm/bitops.h>
+#include <dt-bindings/gpio/gpio.h>
+
+/* Returns the bit value to write or read based on the offset */
+#define GPIO_BIT(x)		(1ULL << ((x) & 0x3f))
+
+#define GPIO_RX_DAT		(0x0)
+#define GPIO_TX_SET		(0x8)
+#define GPIO_TX_CLR		(0x10)
+#define GPIO_CONST		(0x90)
+#define GPIO_RX1_DAT		(0x1400)
+#define GPIO_TX1_SET		(0x1408)
+#define GPIO_TX1_CLR		(0x1410)
+
+/** Returns the offset to the output register based on the offset and value */
+#define GPIO_TX_REG(offset, value)	\
+	((offset) >= 64 ? ((value) ? GPIO_TX1_SET : GPIO_TX1_CLR) : \
+			  ((value) ? GPIO_TX_SET : GPIO_TX_CLR))
+
+/** Returns the offset to the input data register based on the offset */
+#define GPIO_RX_DAT_REG(offset) ((offset >= 64) ? GPIO_RX1_DAT : GPIO_RX_DAT)
+
+/** Returns the bit configuration register based on the offset */
+#define GPIO_BIT_CFG(x)		(0x400 + 8 * (x))
+#define GPIO_BIT_CFG_FN(x)	(((x) >> 16) & 0x3ff)
+#define GPIO_BIT_CFG_TX_OE(x)	((x) & 0x1)
+#define GPIO_BIT_CFG_RX_DAT(x)	((x) & 0x1)
+
+union gpio_const {
+	u64 u;
+	struct {
+		u64 gpios:8;	/** Number of GPIOs implemented */
+		u64 pp:8;	/** Number of PP vectors */
+		u64:48;		/* Reserved */
+	} s;
+};
+
+struct thunderx_gpio {
+	void __iomem *baseaddr;
+};
+
+static int thunderx_gpio_dir_input(struct udevice *dev, unsigned offset)
+{
+	struct thunderx_gpio *gpio = dev_get_priv(dev);
+
+	dev_dbg(dev, "%s: offset=%u\n", __func__, offset);
+	clrbits_le64(gpio->baseaddr + GPIO_BIT_CFG(offset),
+		     (0x3ffUL << 16) | 4UL | 1UL);
+	return 0;
+}
+
+static int thunderx_gpio_dir_output(struct udevice *dev, unsigned offset,
+				    int value)
+{
+	struct thunderx_gpio *gpio = dev_get_priv(dev);
+
+	dev_dbg(dev, "%s: offset=%u value=%d\n", __func__, offset, value);
+	writeq(GPIO_BIT(offset), gpio->baseaddr + GPIO_TX_REG(offset, value));
+
+	clrsetbits_le64(gpio->baseaddr + GPIO_BIT_CFG(offset),
+			(0x3ffUL << 16) | 4UL, 1UL);
+	return 0;
+}
+
+static int thunderx_gpio_get_value(struct udevice *dev,
+				   unsigned offset)
+{
+	struct thunderx_gpio *gpio = dev_get_priv(dev);
+	u64 reg = readq(gpio->baseaddr + GPIO_RX_DAT_REG(offset));
+
+	dev_dbg(dev, "%s: offset=%u value=%d\n", __func__, offset,
+		!!(reg & GPIO_BIT(offset)));
+
+	return !!(reg & GPIO_BIT(offset));
+}
+
+static int thunderx_gpio_set_value(struct udevice *dev,
+				   unsigned offset, int value)
+{
+	struct thunderx_gpio *gpio = dev_get_priv(dev);
+
+	dev_dbg(dev, "%s: offset=%u value=%d\n", __func__, offset, value);
+	writeq(GPIO_BIT(offset), gpio->baseaddr + GPIO_TX_REG(offset, value));
+
+	return 0;
+}
+
+static int thunderx_gpio_get_function(struct udevice *dev,
+				      unsigned offset)
+{
+	struct thunderx_gpio *gpio = dev_get_priv(dev);
+	u64 pinsel = readq(gpio->baseaddr + GPIO_BIT_CFG(offset));
+
+	dev_dbg(dev, "%s: offset=%u pinsel:0x%llx\n", __func__, offset, pinsel);
+	if (GPIO_BIT_CFG_FN(pinsel))
+		return GPIOF_FUNC;
+	else if (GPIO_BIT_CFG_TX_OE(pinsel))
+		return GPIOF_OUTPUT;
+	else
+		return GPIOF_INPUT;
+}
+
+static int thunderx_gpio_xlate(struct udevice *dev, struct gpio_desc *desc,
+			       struct ofnode_phandle_args *args)
+{
+	if (args->args_count < 1)
+		return -EINVAL;
+
+	desc->offset = args->args[0];
+	desc->flags = 0;
+	if (args->args_count > 1) {
+		if (args->args[1] & GPIO_ACTIVE_LOW)
+			desc->flags |= GPIOD_ACTIVE_LOW;
+		/* In the future add tri-state flag support */
+	}
+	return 0;
+}
+
+static const struct dm_gpio_ops thunderx_gpio_ops = {
+	.direction_input	= thunderx_gpio_dir_input,
+	.direction_output	= thunderx_gpio_dir_output,
+	.get_value		= thunderx_gpio_get_value,
+	.set_value		= thunderx_gpio_set_value,
+	.get_function		= thunderx_gpio_get_function,
+	.xlate			= thunderx_gpio_xlate,
+};
+
+static int thunderx_pci_gpio_probe(struct udevice *dev)
+{
+	struct gpio_dev_priv *uc_priv = dev_get_uclass_priv(dev);
+	struct thunderx_gpio *priv = dev_get_priv(dev);
+	pci_dev_t bdf = dm_pci_get_bdf(dev);
+	union gpio_const gpio_const;
+	size_t size;
+
+	dev->req_seq = PCI_FUNC(bdf);
+	priv->baseaddr = dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM);
+	if (!priv->baseaddr) {
+		dev_err(dev, "%s: Could not get base address\n", __func__);
+		return -EINVAL;
+	}
+
+	gpio_const.u = readq(priv->baseaddr + GPIO_CONST);
+
+	dev_dbg(dev, "%s: base address:%p of_offset:%ld pin count: %d\n",
+		__func__, priv->baseaddr, dev->node.of_offset,
+		gpio_const.s.gpios);
+
+	uc_priv->gpio_count = gpio_const.s.gpios;
+	uc_priv->bank_name = strdup(dev->name);
+
+	return 0;
+}
+
+static const struct udevice_id thunderx_gpio_ids[] = {
+	{ .compatible = "cavium,thunder-8890-gpio" },
+	{ .compatible = "cavium,gpio" },
+	{ .compatible = "cavium,thunderx-gpio" },
+	{ }
+};
+
+U_BOOT_DRIVER(thunderx_pci_gpio) = {
+	.name	= "gpio_thunderx",
+	.id	= UCLASS_GPIO,
+	.of_match = of_match_ptr(thunderx_gpio_ids),
+	.probe = thunderx_pci_gpio_probe,
+	.priv_auto_alloc_size = sizeof(struct thunderx_gpio),
+	.ops = &thunderx_gpio_ops,
+};
+
+static struct pci_device_id thunderx_pci_gpio_supported[] = {
+	{ PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_GPIO) },
+	{ },
+};
+
+U_BOOT_PCI_DEVICE(thunderx_pci_gpio, thunderx_pci_gpio_supported);
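One detail worth spelling out in the GPIO driver above is how the register macros split the 80-pin space: pins 0-63 use the first RX/TX register bank, pins 64 and up are steered to the second ("1") bank at 0x1400+, and the bit position always wraps modulo 64. A minimal sketch (the macros are copied from the driver; the illustrative values in the comments are mine):

```c
#include <stdint.h>

/* Register offsets as defined in thunderx_gpio.c above */
#define GPIO_RX_DAT	0x0
#define GPIO_TX_SET	0x8
#define GPIO_TX_CLR	0x10
#define GPIO_RX1_DAT	0x1400
#define GPIO_TX1_SET	0x1408
#define GPIO_TX1_CLR	0x1410

/* Bit index wraps modulo 64; e.g. pin 70 maps to bit 6 of the "1" bank */
#define GPIO_BIT(x)	(1ULL << ((x) & 0x3f))

/* Pins >= 64 select the second bank; value selects the SET or CLR register */
#define GPIO_TX_REG(offset, value) \
	((offset) >= 64 ? ((value) ? GPIO_TX1_SET : GPIO_TX1_CLR) : \
			  ((value) ? GPIO_TX_SET : GPIO_TX_CLR))

/* One 8-byte config register per pin, starting at 0x400 */
#define GPIO_BIT_CFG(x)	(0x400 + 8 * (x))
```

Writing GPIO_BIT(offset) to the SET or CLR register makes set_value() a single store with no read-modify-write, which is why the driver never touches GPIO_RX_DAT on the output path.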

Signed-off-by: Tim Harvey <tharvey@gateworks.com>
---
 configs/thunderx_81xx_defconfig |   2 +
 drivers/i2c/Kconfig             |   7 +
 drivers/i2c/Makefile            |   1 +
 drivers/i2c/thunderx_i2c.c      | 878 ++++++++++++++++++++++++++++++++
 include/configs/thunderx_81xx.h |   2 +
 5 files changed, 890 insertions(+)
 create mode 100644 drivers/i2c/thunderx_i2c.c
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index d07b0e5804..e43aa9750d 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -22,11 +22,13 @@ CONFIG_SYS_PROMPT="ThunderX_81XX> " # CONFIG_CMD_ENV_EXISTS is not set # CONFIG_CMD_FLASH is not set CONFIG_CMD_GPIO=y +CONFIG_CMD_I2C=y CONFIG_CMD_PCI=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y CONFIG_DM_GPIO=y +CONFIG_DM_I2C=y CONFIG_LED=y CONFIG_LED_GPIO=y # CONFIG_MMC is not set diff --git a/drivers/i2c/Kconfig b/drivers/i2c/Kconfig index 1ef22e6bcd..0424b54eb4 100644 --- a/drivers/i2c/Kconfig +++ b/drivers/i2c/Kconfig @@ -374,6 +374,13 @@ config SYS_I2C_SANDBOX bus. Devices can be attached to the bus using the device tree which specifies the driver to use. See sandbox.dts as an example.
+config SYS_I2C_THUNDERX + bool "ThunderX i2c driver" + depends on ARCH_THUNDERX && DM_I2C + default y + help + Enable I2C support for Cavium ThunderX line of processors. + config SYS_I2C_S3C24X0 bool "Samsung I2C driver" depends on ARCH_EXYNOS4 && DM_I2C diff --git a/drivers/i2c/Makefile b/drivers/i2c/Makefile index d3637bcd8d..27087f1814 100644 --- a/drivers/i2c/Makefile +++ b/drivers/i2c/Makefile @@ -34,6 +34,7 @@ obj-$(CONFIG_SYS_I2C_SH) += sh_i2c.o obj-$(CONFIG_SYS_I2C_SOFT) += soft_i2c.o obj-$(CONFIG_SYS_I2C_STM32F7) += stm32f7_i2c.o obj-$(CONFIG_SYS_I2C_TEGRA) += tegra_i2c.o +obj-$(CONFIG_SYS_I2C_THUNDERX) += thunderx_i2c.o obj-$(CONFIG_SYS_I2C_UNIPHIER) += i2c-uniphier.o obj-$(CONFIG_SYS_I2C_UNIPHIER_F) += i2c-uniphier-f.o obj-$(CONFIG_SYS_I2C_VERSATILE) += i2c-versatile.o diff --git a/drivers/i2c/thunderx_i2c.c b/drivers/i2c/thunderx_i2c.c new file mode 100644 index 0000000000..bac442d3fa --- /dev/null +++ b/drivers/i2c/thunderx_i2c.c @@ -0,0 +1,878 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. 
+ */ + +#include <common.h> +#include <i2c.h> +#include <dm.h> +#include <asm/io.h> +#include <asm/arch-thunderx/thunderx.h> + +/* + * Slave address to use for Thunder when accessed by another master + */ +#ifndef CONFIG_SYS_I2C_THUNDERX_SLAVE_ADDR +# define CONFIG_SYS_I2C_THUNDERX_SLAVE_ADDR 0x77 +#endif + +#define TWSI_THP 24 + +#define TWSI_SW_TWSI 0x1000 +#define TWSI_TWSI_SW 0x1008 +#define TWSI_INT 0x1010 +#define TWSI_SW_TWSI_EXT 0x1018 + +union twsx_sw_twsi { + u64 u; + struct { + u64 data:32; + u64 eop_ia:3; + u64 ia:5; + u64 addr:10; + u64 scr:2; + u64 size:3; + u64 sovr:1; + u64 r:1; + u64 op:4; + u64 eia:1; + u64 slonly:1; + u64 v:1; + } s; +}; + +union twsx_sw_twsi_ext { + u64 u; + struct { + u64 data:32; + u64 ia:8; + u64 :24; + } s; +}; + +union twsx_int { + u64 u; + struct { + u64 st_int:1; /** TWSX_SW_TWSI register update int */ + u64 ts_int:1; /** TWSX_TWSI_SW register update int */ + u64 core_int:1; /** TWSI core interrupt, ignored for HLC */ + u64 :5; /** Reserved */ + u64 sda_ovr:1; /** SDA testing override */ + u64 scl_ovr:1; /** SCL testing override */ + u64 sda:1; /** SDA signal */ + u64 scl:1; /** SCL signal */ + u64 :52; /** Reserved */ + } s; +}; + +enum { + TWSI_OP_WRITE = 0, + TWSI_OP_READ = 1, +}; + +enum { + TWSI_EOP_SLAVE_ADDR = 0, + TWSI_EOP_CLK_CTL = 3, + TWSI_SW_EOP_IA = 6, +}; + +enum { + TWSI_SLAVEADD = 0, + TWSI_DATA = 1, + TWSI_CTL = 2, + TWSI_CLKCTL = 3, + TWSI_STAT = 3, + TWSI_SLAVEADD_EXT = 4, + TWSI_RST = 7, +}; + +enum { + TWSI_CTL_AAK = BIT(2), + TWSI_CTL_IFLG = BIT(3), + TWSI_CTL_STP = BIT(4), + TWSI_CTL_STA = BIT(5), + TWSI_CTL_ENAB = BIT(6), + TWSI_CTL_CE = BIT(7), +}; + +enum { + /** Bus error */ + TWSI_STAT_BUS_ERROR = 0x00, + /** Start condition transmitted */ + TWSI_STAT_START = 0x08, + /** Repeat start condition transmitted */ + TWSI_STAT_RSTART = 0x10, + /** Address + write bit transmitted, ACK received */ + TWSI_STAT_TXADDR_ACK = 0x18, + /** Address + write bit transmitted, /ACK received */ + 
TWSI_STAT_TXADDR_NAK = 0x20,
+	/** Data byte transmitted in master mode, ACK received */
+	TWSI_STAT_TXDATA_ACK = 0x28,
+	/** Data byte transmitted in master mode, /ACK received */
+	TWSI_STAT_TXDATA_NAK = 0x30,
+	/** Arbitration lost in address or data byte */
+	TWSI_STAT_TX_ARB_LOST = 0x38,
+	/** Address + read bit transmitted, ACK received */
+	TWSI_STAT_RXADDR_ACK = 0x40,
+	/** Address + read bit transmitted, /ACK received */
+	TWSI_STAT_RXADDR_NAK = 0x48,
+	/** Data byte received in master mode, ACK transmitted */
+	TWSI_STAT_RXDATA_ACK_SENT = 0x50,
+	/** Data byte received, NACK transmitted */
+	TWSI_STAT_RXDATA_NAK_SENT = 0x58,
+	/** Slave address received, sent ACK */
+	TWSI_STAT_SLAVE_RXADDR_ACK = 0x60,
+	/**
+	 * Arbitration lost in address as master, slave address + write bit
+	 * received, ACK transmitted
+	 */
+	TWSI_STAT_TX_ACK_ARB_LOST = 0x68,
+	/** General call address received, ACK transmitted */
+	TWSI_STAT_RX_GEN_ADDR_ACK = 0x70,
+	/**
+	 * Arbitration lost in address as master, general call address
+	 * received, ACK transmitted
+	 */
+	TWSI_STAT_RX_GEN_ADDR_ARB_LOST = 0x78,
+	/** Data byte received after slave address received, ACK transmitted */
+	TWSI_STAT_SLAVE_RXDATA_ACK = 0x80,
+	/** Data byte received after slave address received, /ACK transmitted */
+	TWSI_STAT_SLAVE_RXDATA_NAK = 0x88,
+	/**
+	 * Data byte received after general call address received, ACK
+	 * transmitted
+	 */
+	TWSI_STAT_GEN_RXADDR_ACK = 0x90,
+	/**
+	 * Data byte received after general call address received, /ACK
+	 * transmitted
+	 */
+	TWSI_STAT_GEN_RXADDR_NAK = 0x98,
+	/** STOP or repeated START condition received in slave mode */
+	TWSI_STAT_STOP_MULTI_START = 0xA0,
+	/** Slave address + read bit received, ACK transmitted */
+	TWSI_STAT_SLAVE_RXADDR2_ACK = 0xA8,
+	/**
+	 * Arbitration lost in address as master, slave address + read bit
+	 * received, ACK transmitted
+	 */
+	TWSI_STAT_RXDATA_ACK_ARB_LOST = 0xB0,
+	/** Data byte transmitted in slave mode, ACK received */
TWSI_STAT_SLAVE_TXDATA_ACK = 0xB8, + /** Data byte transmitted in slave mode, /ACK received */ + TWSI_STAT_SLAVE_TXDATA_NAK = 0xC0, + /** Last byte transmitted in slave mode, ACK received */ + TWSI_STAT_SLAVE_TXDATA_END_ACK = 0xC8, + /** Second address byte + write bit transmitted, ACK received */ + TWSI_STAT_TXADDR2DATA_ACK = 0xD0, + /** Second address byte + write bit transmitted, /ACK received */ + TWSI_STAT_TXADDR2DATA_NAK = 0xD8, + /** No relevant status information */ + TWSI_STAT_IDLE = 0xF8 +}; + +struct thunderx_twsi { + int id; + int speed; + void *baseaddr; +}; + +/** Last i2c id assigned */ +static int last_id = 0; + +static void twsi_unblock(void *baseaddr); +static int twsi_stop(void *baseaddr); + + +/** + * Converts the i2c status to a meaningful string + * + * @param status status to convert + * + * @return string representation of the status + */ +static const char *twsi_i2c_status_str(uint8_t status) +{ + switch (status) { + case TWSI_STAT_BUS_ERROR: + return "Bus error"; + case TWSI_STAT_START: + return "START condition transmitted"; + case TWSI_STAT_RSTART: + return "Repeated START condition transmitted"; + case TWSI_STAT_TXADDR_ACK: + return "Address + write bit transmitted, ACK received"; + case TWSI_STAT_TXADDR_NAK: + return "Address + write bit transmitted, NAK received"; + case TWSI_STAT_TXDATA_ACK: + return "Data byte transmitted in master mode, ACK received"; + case TWSI_STAT_TXDATA_NAK: + return "Data byte transmitted in master mode, NAK received"; + case TWSI_STAT_TX_ARB_LOST: + return "Arbitration lost in address or data byte"; + case TWSI_STAT_RXADDR_ACK: + return "Address + read bit transmitted, ACK received"; + case TWSI_STAT_RXADDR_NAK: + return "Address + read bit transmitted, NAK received"; + case TWSI_STAT_RXDATA_ACK_SENT: + return "Data byte received in master mode, ACK transmitted"; + case TWSI_STAT_RXDATA_NAK_SENT: + return "Data byte received in master mode, NAK transmitted"; + case TWSI_STAT_SLAVE_RXADDR_ACK: + return "Slave 
address + write bit received, ACK transmitted"; + case TWSI_STAT_TX_ACK_ARB_LOST: + return "Arbitration lost in address as master, slave address + write bit received, ACK transmitted"; + case TWSI_STAT_RX_GEN_ADDR_ACK: + return "General call address received, ACK transmitted"; + case TWSI_STAT_RX_GEN_ADDR_ARB_LOST: + return "Arbitration lost in address as master, general call address received, ACK transmitted"; + case TWSI_STAT_SLAVE_RXDATA_ACK: + return "Data byte received after slave address received, ACK transmitted"; + case TWSI_STAT_SLAVE_RXDATA_NAK: + return "Data byte received after slave address received, NAK transmitted"; + case TWSI_STAT_GEN_RXADDR_ACK: + return "Data byte received after general call address received, ACK transmitted"; + case TWSI_STAT_GEN_RXADDR_NAK: + return "Data byte received after general call address received, NAK transmitted"; + case TWSI_STAT_STOP_MULTI_START: + return "STOP or repeated START condition received in slave mode"; + case TWSI_STAT_SLAVE_RXADDR2_ACK: + return "Slave address + read bit received, ACK transmitted"; + case TWSI_STAT_RXDATA_ACK_ARB_LOST: + return "Arbitration lost in address as master, slave address + read bit received, ACK transmitted"; + case TWSI_STAT_SLAVE_TXDATA_ACK: + return "Data byte transmitted in slave mode, ACK received"; + case TWSI_STAT_SLAVE_TXDATA_NAK: + return "Data byte transmitted in slave mode, NAK received"; + case TWSI_STAT_SLAVE_TXDATA_END_ACK: + return "Last byte transmitted in slave mode, ACK received"; + case TWSI_STAT_TXADDR2DATA_ACK: + return "Second address byte + write bit transmitted, ACK received"; + case TWSI_STAT_TXADDR2DATA_NAK: + return "Second address byte + write bit transmitted, NAK received"; + case TWSI_STAT_IDLE: + return "Idle"; + default: + return "Unknown status code"; + } +} + +/** + * Returns true if we lost arbitration + * + * @param code status code + * @param final_read true if this is the final read operation + * + * @return true if arbitration has been 
lost, false if it hasn't been lost. + */ +static int twsi_i2c_lost_arb(u8 code, int final_read) +{ + switch (code) { + /* Arbitration lost */ + case TWSI_STAT_TX_ARB_LOST: + case TWSI_STAT_TX_ACK_ARB_LOST: + case TWSI_STAT_RX_GEN_ADDR_ARB_LOST: + case TWSI_STAT_RXDATA_ACK_ARB_LOST: + return -EAGAIN; + + /* Being addressed as slave, should back off and listen */ + case TWSI_STAT_SLAVE_RXADDR_ACK: + case TWSI_STAT_RX_GEN_ADDR_ACK: + case TWSI_STAT_GEN_RXADDR_ACK: + case TWSI_STAT_GEN_RXADDR_NAK: + return -EIO; + + /* Core busy as slave */ + case TWSI_STAT_SLAVE_RXDATA_ACK: + case TWSI_STAT_SLAVE_RXDATA_NAK: + case TWSI_STAT_STOP_MULTI_START: + case TWSI_STAT_SLAVE_RXADDR2_ACK: + case TWSI_STAT_SLAVE_TXDATA_ACK: + case TWSI_STAT_SLAVE_TXDATA_NAK: + case TWSI_STAT_SLAVE_TXDATA_END_ACK: + return -EIO; + + /* Ack allowed on pre-terminal bytes only */ + case TWSI_STAT_RXDATA_ACK_SENT: + if (!final_read) + return 0; + return -EAGAIN; + + /* NAK allowed on terminal byte only */ + case TWSI_STAT_RXDATA_NAK_SENT: + if (!final_read) + return 0; + return -EAGAIN; + + case TWSI_STAT_TXDATA_NAK: + case TWSI_STAT_TXADDR_NAK: + case TWSI_STAT_RXADDR_NAK: + case TWSI_STAT_TXADDR2DATA_NAK: + return -EAGAIN; + } + return 0; +} + +/** + * Writes to the MIO_TWS(0..5)_SW_TWSI register + * + * @param baseaddr Base address of i2c registers + * @param sw_twsi value to write + * + * @return 0 for success, otherwise error + */ +static u64 twsi_write_sw(void *baseaddr, union twsx_sw_twsi sw_twsi) +{ + unsigned long start = get_timer(0); + + sw_twsi.s.r = 0; + sw_twsi.s.v = 1; + + debug("%s(%p, 0x%llx)\n", __func__, baseaddr, sw_twsi.u); + writeq(sw_twsi.u, baseaddr + TWSI_SW_TWSI); + do { + sw_twsi.u = readq(baseaddr + TWSI_SW_TWSI); + } while (sw_twsi.s.v != 0 && get_timer(start) < 50); + + if (sw_twsi.s.v) + debug("%s: timed out\n", __func__); + return sw_twsi.u; +} + +/** + * Reads the MIO_TWS(0..5)_SW_TWSI register + * + * @param baseaddr Base address of i2c registers + * @param sw_twsi 
value for eia and op, etc. to read + * + * @return value of the register + */ +static u64 twsi_read_sw(void *baseaddr, union twsx_sw_twsi sw_twsi) +{ + unsigned long start = get_timer(0); + sw_twsi.s.r = 1; + sw_twsi.s.v = 1; + + debug("%s(%p, 0x%llx)\n", __func__, baseaddr, sw_twsi.u); + writeq(sw_twsi.u, baseaddr + TWSI_SW_TWSI); + + do { + sw_twsi.u = readq(baseaddr + TWSI_SW_TWSI); + } while (sw_twsi.s.v != 0 && get_timer(start) < 50); + + if (sw_twsi.s.v) + debug("%s: Error writing 0x%llx\n", __func__, sw_twsi.u); + + debug("%s: Returning 0x%llx\n", __func__, sw_twsi.u); + return sw_twsi.u; +} + +/** + * Write control register + * + * @param baseaddr Base address for i2c registers + * @param data data to write + */ +static void twsi_write_ctl(void *baseaddr, u8 data) +{ + union twsx_sw_twsi twsi_sw; + + debug("%s(%p, 0x%x)\n", __func__, baseaddr, data); + twsi_sw.u = 0; + + twsi_sw.s.op = TWSI_SW_EOP_IA; + twsi_sw.s.eop_ia = TWSI_CTL; + twsi_sw.s.data = data; + + twsi_write_sw(baseaddr, twsi_sw); +} + +/** + * Reads the TWSI Control Register + * + * @param[in] baseaddr Base address for i2c + * + * @return 8-bit TWSI control register + */ +static u32 twsi_read_ctl(void *baseaddr) +{ + union twsx_sw_twsi sw_twsi; + + sw_twsi.u = 0; + sw_twsi.s.op = TWSI_SW_EOP_IA; + sw_twsi.s.eop_ia = TWSI_CTL; + + sw_twsi.u = twsi_read_sw(baseaddr, sw_twsi); + debug("%s(%p): 0x%x\n", __func__, baseaddr, sw_twsi.s.data); + return sw_twsi.s.data; +} + +/** + * Read i2c status register + * + * @param baseaddr Base address of i2c registers + * + * @return value of status register + */ +static u8 twsi_read_status(void *baseaddr) +{ + union twsx_sw_twsi twsi_sw; + + twsi_sw.u = 0; + twsi_sw.s.op = TWSI_SW_EOP_IA; + twsi_sw.s.eop_ia = TWSI_STAT; + + return twsi_read_sw(baseaddr, twsi_sw); +} + +/** + * Waits for an i2c operation to complete + * + * @param baseaddr Base address of registers + * + * @return 0 for success, 1 if timeout + */ +static int twsi_wait(void *baseaddr) +{ + 
unsigned long start = get_timer(0);
+	u8 twsi_ctl;
+
+	debug("%s(%p)\n", __func__, baseaddr);
+	do {
+		twsi_ctl = twsi_read_ctl(baseaddr);
+		twsi_ctl &= TWSI_CTL_IFLG;
+	} while (!twsi_ctl && get_timer(start) < 50);
+
+	debug("  return: %u\n", !twsi_ctl);
+	return !twsi_ctl;
+}
+
+/**
+ * Unsticks the i2c bus
+ *
+ * @param baseaddr	base address of registers
+ */
+static int twsi_start_unstick(void *baseaddr)
+{
+	twsi_stop(baseaddr);
+	twsi_unblock(baseaddr);
+
+	return 0;
+}
+
+/**
+ * Sends an i2c start condition
+ *
+ * @param baseaddr	base address of registers
+ *
+ * @return 0 for success, otherwise error
+ */
+static int twsi_start(void *baseaddr)
+{
+	int result;
+	u8 stat;
+
+	debug("%s(%p)\n", __func__, baseaddr);
+	twsi_write_ctl(baseaddr, TWSI_CTL_STA | TWSI_CTL_ENAB);
+	result = twsi_wait(baseaddr);
+	if (result) {
+		stat = twsi_read_status(baseaddr);
+		debug("%s: result: 0x%x, status: 0x%x\n", __func__,
+		      result, stat);
+		switch (stat) {
+		case TWSI_STAT_START:
+		case TWSI_STAT_RSTART:
+			return 0;
+		case TWSI_STAT_RXADDR_ACK:
+		default:
+			return twsi_start_unstick(baseaddr);
+		}
+	}
+	debug("%s: success\n", __func__);
+	return 0;
+}
+
+/**
+ * Sends an i2c stop condition
+ *
+ * @param baseaddr	register base address
+ *
+ * @return 0 for success, -1 if error
+ */
+static int twsi_stop(void *baseaddr)
+{
+	u8 stat;
+
+	twsi_write_ctl(baseaddr, TWSI_CTL_STP | TWSI_CTL_ENAB);
+
+	stat = twsi_read_status(baseaddr);
+	if (stat != TWSI_STAT_IDLE) {
+		debug("%s: Bad status on bus@%p\n", __func__, baseaddr);
+		return -1;
+	}
+	return 0;
+}
+
+/**
+ * Writes data to the i2c bus
+ *
+ * @param baseaddr	register base address
+ * @param slave_addr	address of slave to write to
+ * @param buffer	Pointer to buffer to write
+ * @param length	Number of bytes in buffer to write
+ *
+ * @return 0 for success, otherwise error
+ */
+static int twsi_write_data(void *baseaddr, u8 slave_addr,
+			   u8 *buffer, unsigned int length)
+{
+	union twsx_sw_twsi twsi_sw;
+	unsigned int curr = 0;
+	int result;
+
+	debug("%s(%p, 0x%x, %p, 0x%x)\n", __func__, baseaddr, slave_addr,
+	      buffer, length);
+	result = twsi_start(baseaddr);
+	if (result) {
+		debug("%s: Could not start BUS transaction\n", __func__);
+		return -1;
+	}
+
+	result = twsi_wait(baseaddr);
+	if (result) {
+		debug("%s: wait failed\n", __func__);
+		return result;
+	}
+
+	twsi_sw.u = 0;
+	twsi_sw.s.op = TWSI_SW_EOP_IA;
+	twsi_sw.s.eop_ia = TWSI_DATA;
+	twsi_sw.s.data = (u32)(slave_addr << 1) | TWSI_OP_WRITE;
+
+	twsi_write_sw(baseaddr, twsi_sw);
+	twsi_write_ctl(baseaddr, TWSI_CTL_ENAB);
+
+	debug("%s: Waiting\n", __func__);
+	result = twsi_wait(baseaddr);
+	if (result) {
+		debug("%s: Timed out writing slave address 0x%x to target\n",
+		      __func__, slave_addr);
+		return result;
+	}
+
+	result = twsi_read_status(baseaddr);
+	debug("%s: status: (%d) %s\n", __func__, result,
+	      twsi_i2c_status_str(result));
+	if (result != TWSI_STAT_TXADDR_ACK) {
+		twsi_stop(baseaddr);
+		return twsi_i2c_lost_arb(result, 0);
+	}
+
+	while (curr < length) {
+		twsi_sw.u = 0;
+		twsi_sw.s.op = TWSI_SW_EOP_IA;
+		twsi_sw.s.eop_ia = TWSI_DATA;
+		twsi_sw.s.data = buffer[curr++];
+
+		twsi_write_sw(baseaddr, twsi_sw);
+		twsi_write_ctl(baseaddr, TWSI_CTL_ENAB);
+
+		debug("%s: Writing 0x%x\n", __func__, twsi_sw.s.data);
+
+		result = twsi_wait(baseaddr);
+		if (result) {
+			debug("%s: Timed out writing data to 0x%x\n",
+			      __func__, slave_addr);
+			return result;
+		}
+		result = twsi_read_status(baseaddr);
+		debug("%s: status: (%d) %s\n", __func__, result,
+		      twsi_i2c_status_str(result));
+	}
+
+	debug("%s: Stopping\n", __func__);
+	return twsi_stop(baseaddr);
+}
+
+/**
+ * Manually clear the I2C bus and send a stop
+ */
+static void twsi_unblock(void *baseaddr)
+{
+	int i;
+	union twsx_int int_reg;
+
+	int_reg.u = 0;
+	for (i = 0; i < 9; i++) {
+		int_reg.s.scl_ovr = 0;
+		writeq(int_reg.u, baseaddr + TWSI_INT);
udelay(5); + int_reg.s.scl_ovr = 1; + writeq(int_reg.u, baseaddr + TWSI_INT); + udelay(5); + } + int_reg.s.sda_ovr = 1; + writeq(int_reg.u, baseaddr + TWSI_INT); + udelay(5); + int_reg.s.scl_ovr = 0; + writeq(int_reg.u, baseaddr + TWSI_INT); + udelay(5); + int_reg.u = 0; + writeq(int_reg.u, baseaddr + TWSI_INT); + udelay(5); +} + +/** + * Performs a read transaction on the i2c bus + * + * @param baseaddr Base address of twsi registers + * @param slave_addr i2c bus address to read from + * @param buffer buffer to read into + * @param length number of bytes to read + * + * @return 0 for success, otherwise error + */ +static int twsi_read_data(void *baseaddr, u8 slave_addr, + u8 *buffer, unsigned int length) +{ + union twsx_sw_twsi twsi_sw; + unsigned int curr = 0; + int result; + + debug("%s(%p, 0x%x, %p, %u)\n", __func__, baseaddr, slave_addr, + buffer, length); + result = twsi_start(baseaddr); + if (result) { + debug("%s: start failed\n", __func__); + return result; + } + + result = twsi_wait(baseaddr); + if (result) { + debug("%s: wait failed\n", __func__); + return result; + } + + twsi_sw.u = 0; + twsi_sw.s.op = TWSI_SW_EOP_IA; + twsi_sw.s.eop_ia = TWSI_DATA; + + twsi_sw.s.data = (u32) (slave_addr << 1) | TWSI_OP_READ; + + twsi_write_sw(baseaddr, twsi_sw); + twsi_write_ctl(baseaddr, TWSI_CTL_ENAB); + + result = twsi_wait(baseaddr); + if (result) { + debug("%s: waiting for sending addr failed\n", __func__); + return result; + } + + result = twsi_read_status(baseaddr); + debug("%s: status: (%d) %s\n", __func__, result, + twsi_i2c_status_str(result)); + if (result != TWSI_STAT_RXADDR_ACK) { + debug("%s: status: (%d) %s\n", __func__, result, + twsi_i2c_status_str(result)); + twsi_stop(baseaddr); + return twsi_i2c_lost_arb(result, 0); + } + + while (curr < length) { + twsi_write_ctl(baseaddr, TWSI_CTL_ENAB | + ((curr < length - 1) ? 
TWSI_CTL_AAK : 0)); + + result = twsi_wait(baseaddr); + if (result) { + debug("%s: waiting for data failed\n", __func__); + return result; + } + + twsi_sw.u = twsi_read_sw(baseaddr, twsi_sw); + buffer[curr++] = twsi_sw.s.data; + } + + twsi_stop(baseaddr); + + return 0; +} + +static int twsi_init(void *baseaddr, unsigned int speed, int slaveaddr) +{ + int io_clock_hz; + int n_div; + int m_div; + union twsx_sw_twsi sw_twsi; + + debug("%s(%p, %u, 0x%x)\n", __func__, baseaddr, speed, slaveaddr); + io_clock_hz = thunderx_get_io_clock(); + + /* Set the TWSI clock to a conservative TWSI_BUS_FREQ. Compute the + * clocks M divider based on the SCLK. + * TWSI freq = (core freq) / (20 x (M+1) x (thp+1) x 2^N) + * M = ((core freq) / (20 x (TWSI freq) x (thp+1) x 2^N)) - 1 + */ + for (n_div = 0; n_div < 8; n_div++) { + m_div = io_clock_hz / (20 * speed * (TWSI_THP + 1)); + m_div /= 1 << n_div; + m_div -= 1; + if (m_div < 16) + break; + } + if (m_div >= 16) + return -1; + + sw_twsi.u = 0; + sw_twsi.s.v = 1; /* Clear valid bit */ + sw_twsi.s.op = 0x6; /* See EOP field */ + sw_twsi.s.r = 0; /* Select CLKCTL when R = 0 */ + sw_twsi.s.eop_ia = 3; /* R=0 selects CLKCTL, R=1 selects STAT */ + sw_twsi.s.data = ((m_div & 0xf) << 3) | ((n_div & 0x7) << 0); + + twsi_write_sw(baseaddr, sw_twsi); + /* Only init non-slave ports */ + debug("%s: Writing 0x%llx to sw_twsi, m_div: 0x%x, n_div: 0x%x\n", + __func__, sw_twsi.u, m_div, n_div); + + + sw_twsi.u = 0; + sw_twsi.s.v = 1; + sw_twsi.s.op = TWSI_SW_EOP_IA; + sw_twsi.s.r = 0; + sw_twsi.s.eop_ia = 0; + sw_twsi.s.data = slaveaddr << 1; + + twsi_write_sw(baseaddr, sw_twsi); + + /* Set slave address */ + sw_twsi.u = 0; + sw_twsi.s.v = 1; + sw_twsi.s.op = TWSI_SW_EOP_IA; + sw_twsi.s.r = 0; + sw_twsi.s.eop_ia = TWSI_EOP_SLAVE_ADDR; + sw_twsi.s.data = slaveaddr; + twsi_write_sw(baseaddr, sw_twsi); + + return 0; +} + +/** + * Transfers data over the i2c bus + * + * @param bus i2c bus to transfer data over + * @param msg Array of i2c messages + * 
@param nmsgs Number of messages to send/receive + * + * @return 0 for success, otherwise error + */ +static int thunderx_i2c_xfer(struct udevice *bus, struct i2c_msg *msg, + int nmsgs) +{ + struct thunderx_twsi *twsi = dev_get_priv(bus); + int result; + + debug("thunderx_i2c_xfer: %d messages\n", nmsgs); + for (; nmsgs > 0; nmsgs--, msg++) { + debug("thunderx_i2c_xfer: chip=0x%x, len=0x%x\n", + msg->addr, msg->len); + if (msg->flags & I2C_M_RD) { + debug("%s: Reading data\n", __func__); + result = twsi_read_data(twsi->baseaddr, msg->addr, + msg->buf, msg->len); + } else { + debug("%s: Writing data\n", __func__); + result = twsi_write_data(twsi->baseaddr, msg->addr, + msg->buf, msg->len); + } + if (result) { + debug("thunderx_i2c_xfer: error sending\n"); + return -EREMOTEIO; + } + } + + return 0; +} + +static int thunderx_i2c_set_bus_speed(struct udevice *bus, unsigned int speed) +{ + struct thunderx_twsi *twsi = dev_get_priv(bus); + int m_div, n_div; + unsigned io_clock_hz; + union twsx_sw_twsi sw_twsi; + void *baseaddr = twsi->baseaddr; + + io_clock_hz = thunderx_get_io_clock(); + debug("%s(%p, %u) io clock: %u\n", __func__, bus, speed, io_clock_hz); + + /* Set the TWSI clock to a conservative TWSI_BUS_FREQ. Compute the + * clocks M divider based on the SCLK. 
+ * TWSI freq = (core freq) / (20 x (M+1) x (thp+1) x 2^N) + * M = ((core freq) / (20 x (TWSI freq) x (thp+1) x 2^N)) - 1 */ + for (n_div = 0; n_div < 8; n_div++) { + m_div = io_clock_hz / (20 * speed * (TWSI_THP + 1)); + m_div /= 1 << n_div; + m_div -= 1; + if (m_div < 16) + break; + } + if (m_div >= 16) + return -1; + + sw_twsi.u = 0; + sw_twsi.s.v = 1; /* Clear valid bit */ + sw_twsi.s.op = TWSI_SW_EOP_IA; /* See EOP field */ + sw_twsi.s.r = 0; /* Select CLKCTL when R = 0 */ + sw_twsi.s.eop_ia = TWSI_CLKCTL; /* R=0 selects CLKCTL, R=1 selects STAT */ + sw_twsi.s.data = ((m_div & 0xf) << 3) | ((n_div & 0x7) << 0); + + /* Only init non-slave ports */ + writeq(sw_twsi.u, baseaddr + TWSI_SW_TWSI); + + debug("%s: Wrote 0x%llx to sw_twsi\n", __func__, sw_twsi.u); + return 0; +} + +static int thunderx_pci_i2c_probe(struct udevice *dev) +{ + struct thunderx_twsi *twsi = dev_get_priv(dev); + size_t size; + pci_dev_t bdf = dm_pci_get_bdf(dev); + + debug("TWSI PCI device: %x\n", bdf); + dev->req_seq = PCI_FUNC(bdf); + + twsi->baseaddr = dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM); + twsi->id = last_id++; + + debug("TWSI bus %d at %p\n",dev->seq, twsi->baseaddr); + + return twsi_init(twsi->baseaddr, CONFIG_SYS_I2C_SPEED, + CONFIG_SYS_I2C_THUNDERX_SLAVE_ADDR); +} + +static const struct dm_i2c_ops thunderx_i2c_ops = { + .xfer = thunderx_i2c_xfer, + .set_bus_speed = thunderx_i2c_set_bus_speed, +}; + +static const struct udevice_id thunderx_i2c_ids[] = { + { .compatible = "cavium,thunder-8890-twsi" }, + { } +}; + +U_BOOT_DRIVER(thunderx_pci_twsi) = { + .name = "i2c_thunderx", + .id = UCLASS_I2C, + .of_match = thunderx_i2c_ids, + .probe = thunderx_pci_i2c_probe, + .priv_auto_alloc_size = sizeof(struct thunderx_twsi), + .ops = &thunderx_i2c_ops, +}; + +static struct pci_device_id thunderx_pci_twsi_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_TWSI) }, + { }, +}; + +U_BOOT_PCI_DEVICE(thunderx_pci_twsi, thunderx_pci_twsi_supported); diff --git 
a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h index 10c4a89232..7a2c0adde2 100644 --- a/include/configs/thunderx_81xx.h +++ b/include/configs/thunderx_81xx.h @@ -68,4 +68,6 @@ #define PLL_REF_CLK 50000000 /* 50 MHz */ #define NS_PER_REF_CLK_TICK (1000000000/PLL_REF_CLK)
+#define CONFIG_SYS_I2C_SPEED	100000
+
 #endif /* __THUNDERX_81XX_H__ */
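The divider search duplicated in twsi_init() and thunderx_i2c_set_bus_speed() above can be checked in isolation: for each N in 0..7 it computes M from TWSI freq = io_clock / (20 x (M+1) x (THP+1) x 2^N) and stops at the first M that fits in 4 bits. A minimal sketch of that logic (the helper name twsi_calc_div and the 500 MHz io-clock figure are mine for illustration; on hardware the clock comes from thunderx_get_io_clock()):

```c
#include <assert.h>

#define TWSI_THP 24

/* Mirror of the divider search in the driver: find N (0..7) and M (0..15)
 * such that TWSI freq = io_clock / (20 * (M + 1) * (TWSI_THP + 1) * 2^N).
 * Returns 0 on success, -1 if the requested speed is unreachable. */
static int twsi_calc_div(unsigned int io_clock_hz, unsigned int speed,
			 int *m_div, int *n_div)
{
	int m, n;

	for (n = 0; n < 8; n++) {
		m = io_clock_hz / (20 * speed * (TWSI_THP + 1));
		m /= 1 << n;
		m -= 1;
		if (m < 16)
			break;
	}
	if (m >= 16)
		return -1;
	*m_div = m;
	*n_div = n;
	return 0;
}
```

For example, with a hypothetical 500 MHz io clock and the 100 kHz bus speed from the defconfig, the search settles on N = 0, M = 9, giving exactly 500 MHz / (20 x 10 x 25 x 1) = 100 kHz.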

Signed-off-by: Tim Harvey <tharvey@gateworks.com>
---
 configs/thunderx_81xx_defconfig |   4 +
 drivers/spi/Kconfig             |   6 +
 drivers/spi/Makefile            |   1 +
 drivers/spi/thunderx_spi.c      | 448 ++++++++++++++++++++++++++++++++
 4 files changed, 459 insertions(+)
 create mode 100755 drivers/spi/thunderx_spi.c
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index e43aa9750d..48f57ecf1b 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -24,6 +24,7 @@ CONFIG_SYS_PROMPT="ThunderX_81XX> " CONFIG_CMD_GPIO=y CONFIG_CMD_I2C=y CONFIG_CMD_PCI=y +CONFIG_CMD_SPI=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y @@ -39,3 +40,6 @@ CONFIG_PCI_THUNDERX=y CONFIG_DM_SERIAL=y CONFIG_DEBUG_UART_PL011=y CONFIG_DEBUG_UART_SKIP_INIT=y +CONFIG_SPI=y +CONFIG_DM_SPI=y +CONFIG_THUNDERX_SPI=y diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig index 516188ea88..d1d5463909 100644 --- a/drivers/spi/Kconfig +++ b/drivers/spi/Kconfig @@ -233,6 +233,12 @@ config TEGRA210_QSPI be used to access SPI chips on platforms embedding this NVIDIA Tegra210 IP core.
+config THUNDERX_SPI + bool "Cavium ThunderX SPI driver" + help + Enable the Cavium ThunderX SPI driver. This driver can be used to + access the SPI NOR flash on ThunderX SoC platforms. + config XILINX_SPI bool "Xilinx SPI driver" help diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile index 7242ea7e40..ea99775094 100644 --- a/drivers/spi/Makefile +++ b/drivers/spi/Makefile @@ -52,6 +52,7 @@ obj-$(CONFIG_TEGRA114_SPI) += tegra114_spi.o obj-$(CONFIG_TEGRA20_SFLASH) += tegra20_sflash.o obj-$(CONFIG_TEGRA20_SLINK) += tegra20_slink.o obj-$(CONFIG_TEGRA210_QSPI) += tegra210_qspi.o +obj-$(CONFIG_THUNDERX_SPI) += thunderx_spi.o obj-$(CONFIG_TI_QSPI) += ti_qspi.o obj-$(CONFIG_XILINX_SPI) += xilinx_spi.o obj-$(CONFIG_ZYNQ_SPI) += zynq_spi.o diff --git a/drivers/spi/thunderx_spi.c b/drivers/spi/thunderx_spi.c new file mode 100755 index 0000000000..e165215524 --- /dev/null +++ b/drivers/spi/thunderx_spi.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018, Cavium Inc. + */ + +#include <common.h> +#include <dm.h> +#include <malloc.h> +#include <spi.h> +#include <watchdog.h> + +#include <asm/io.h> +#include <asm/arch-thunderx/thunderx.h> +#include <asm/unaligned.h> + +#define THUNDERX_SPI_MAX_BYTES 9 +#define THUNDERX_SPI_MAX_CLOCK_HZ 50000000 + +#define THUNDERX_SPI_NUM_CS 4 + +#define THUNDERX_SPI_CS_VALID(cs) ((cs) < THUNDERX_SPI_NUM_CS) + +#define MPI_CFG 0x1000 +#define MPI_STS 0x1008 +#define MPI_TX 0x1010 +#define MPI_WIDE_DAT 0x1040 +#define MPI_DAT(X) (0x1080 + ((X) << 3)) + +union mpi_cfg { + uint64_t u; + struct mpi_cfg_s { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + uint64_t :35; + + uint64_t clkdiv :13; /** clock divisor */ + uint64_t csena3 :1; /** cs enable 3. 
*/ + uint64_t csena2 :1; /** cs enable 2 */ + uint64_t csena1 :1; /** cs enable 1 */ + uint64_t csena0 :1; /** cs enable 0 */ + /** + * 0 = SPI_CSn asserts 1/2 coprocessor-clock cycle before + * transaction + * 1 = SPI_CSn asserts coincident with transaction + */ + uint64_t cslate :1; + /** + * Tristate TX. Set to 1 to tristate SPI_DO when not + * transmitting. + */ + uint64_t tritx :1; + /** + * When set, guarantees idle coprocessor-clock cycles between + * commands. + */ + uint64_t idleclks :2; + /** + * SPI_CSn_L high. 1 = SPI_CSn_L is asserted high, + * 0 = SPI_CS_n asserted low. + */ + uint64_t cshi :1; + uint64_t :2; /** Reserved */ + /** 0 = shift MSB first, 1 = shift LSB first */ + uint64_t lsbfirst :1; + /** + * Wire-or DO and DI. + * 0 = SPI_DO and SPI_DI are separate wires (SPI). SPI_DO pin + * is always driven. + * 1 = SPI_DO/DI is all from SPI_DO pin (MPI). SPI_DO pin is + * tristated when not transmitting. If WIREOR = 1, SPI_DI + * pin is not used by the MPI/SPI engine. + */ + uint64_t wireor :1; + /** + * Clock control. + * 0 = Clock idles to value given by IDLELO after completion of + * MPI/SPI transaction. + * 1 = Clock never idles, requires SPI_CSn_L + * deassertion/assertion between commands. + */ + uint64_t clk_cont :1; + /** + * Clock idle low/clock invert + * 0 = SPI_CLK idles high, first transition is high-to-low. + * This correspondes to SPI Block Guide options CPOL = 1, + * CPHA = 0. + * 1 = SPI_CLK idles low, first transition is low-to-high. This + * corresponds to SPI Block Guide options CPOL = 0, CPHA = 0. 
+ */ + uint64_t idlelo :1; + /** MPI/SPI enable, 0 = pins are tristated, 1 = pins driven */ + uint64_t enable :1; +#else /* Word 0 - Little Endian */ + uint64_t enable :1; + uint64_t idlelo :1; + uint64_t clk_cont :1; + uint64_t wireor :1; + uint64_t lsbfirst :1; + uint64_t :2; + uint64_t cshi :1; + uint64_t idleclks :2; + uint64_t tritx :1; + uint64_t cslate :1; + uint64_t csena0 :1; + uint64_t csena1 :1; + uint64_t csena2 :1; + uint64_t csena3 :1; + uint64_t clkdiv :13; + uint64_t :35; /** Reserved */ +#endif /* Word 0 - End */ + } s; + /* struct mpi_cfg_s cn; */ +}; + +/** + * Register (NCB) mpi_dat# + * + * MPI/SPI Data Registers + */ +union mpi_dat { + uint64_t u; + struct mpi_datx_s { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + uint64_t reserved_8_63 :56; + /**< [ 7: 0](R/W/H) Data to transmit/receive. */ + uint64_t data :8; +#else /* Word 0 - Little Endian */ + uint64_t data :8; + uint64_t reserved_8_63 :56; +#endif /* Word 0 - End */ + } s; + /* struct mpi_datx_s cn; */ +}; + +/** + * Register (NCB) mpi_sts + * + * MPI/SPI STS Register + */ +union mpi_sts { + uint64_t u; + struct mpi_sts_s { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + uint64_t reserved_13_63 :51; + uint64_t rxnum :5; /** Number of bytes */ + uint64_t reserved_2_7 :6; + uint64_t mpi_intr :1; /** Transaction done int */ + uint64_t busy :1; /** SPI engine busy */ +#else /* Word 0 - Little Endian */ + uint64_t busy :1; + uint64_t mpi_intr :1; + uint64_t reserved_2_7 :6; + uint64_t rxnum :5; + uint64_t reserved_13_63 :51; +#endif /* Word 0 - End */ + } s; + /* struct mpi_sts_s cn; */ +}; + +/** + * Register (NCB) mpi_tx + * + * MPI/SPI Transmit Register + */ +union mpi_tx { + uint64_t u; + struct mpi_tx_s { +#if __BYTE_ORDER == __BIG_ENDIAN /* Word 0 - Big Endian */ + uint64_t :42; /* Reserved */ + uint64_t csid :2; /** Which CS to assert */ + uint64_t :3; /* Reserved */ + uint64_t leavecs :1; /** Leave CSn asserted */ + uint64_t :3; /* Reserved */ + uint64_t 
txnum :5; /** Number of words to tx */ + uint64_t :3; /* Reserved */ + uint64_t totnum :5; /** Total bytes to shift */ +#else /* Word 0 - Little Endian */ + uint64_t totnum :5; + uint64_t :3; + uint64_t txnum :5; + uint64_t :3; + uint64_t leavecs :1; + uint64_t :3; + uint64_t csid :2; + uint64_t :42; +#endif /* Word 0 - End */ + } s; + /* struct mpi_tx_s cn; */ +}; + +/** Local driver data structure */ +struct thunderx_spi { + void *baseaddr; /** Register base address */ + u32 clkdiv; /** Clock divisor for device speed */ +}; + +void *thunderx_spi_get_baseaddr(struct udevice *dev) +{ + struct udevice *bus = dev_get_parent(dev); + struct thunderx_spi *priv = dev_get_priv(bus); + + return priv->baseaddr; +} + +static union mpi_cfg thunderx_spi_set_mpicfg(struct udevice *dev) +{ + struct dm_spi_slave_platdata *slave = dev_get_parent_platdata(dev); + struct udevice *bus = dev_get_parent(dev); + struct thunderx_spi *priv = dev_get_priv(bus); + union mpi_cfg mpi_cfg; + uint max_speed = slave->max_hz; + bool cpha, cpol; + + if (!max_speed) + max_speed = 12500000; + if (max_speed > THUNDERX_SPI_MAX_CLOCK_HZ) + max_speed = THUNDERX_SPI_MAX_CLOCK_HZ; + + dev_dbg(dev, "%s: CS%d Hz=%d Mode=%x\n", __func__, + slave->cs, slave->max_hz, slave->mode); + cpha = !!(slave->mode & SPI_CPHA); + cpol = !!(slave->mode & SPI_CPOL); + + mpi_cfg.u = 0; + mpi_cfg.s.clkdiv = priv->clkdiv & 0x1fff; + mpi_cfg.s.cshi = !!(slave->mode & SPI_CS_HIGH); + mpi_cfg.s.lsbfirst = !!(slave->mode & SPI_LSB_FIRST); + mpi_cfg.s.wireor = !!(slave->mode & SPI_3WIRE); + mpi_cfg.s.idlelo = cpha != cpol; + mpi_cfg.s.cslate = cpha; + mpi_cfg.s.enable = 1; + mpi_cfg.s.csena0 = 1; + mpi_cfg.s.csena1 = 1; + mpi_cfg.s.csena2 = 1; + mpi_cfg.s.csena3 = 1; + dev_dbg(dev, "%s: mpi_cfg=%llx\n", __func__, mpi_cfg.u); + + return mpi_cfg; +} + +/** + * Wait until the SPI bus is ready + * + * @param dev SPI device to wait for + */ +static void thunderx_spi_wait_ready(struct udevice *dev) +{ + void *baseaddr = 
thunderx_spi_get_baseaddr(dev); + union mpi_sts mpi_sts; + + do { + mpi_sts.u = readq(baseaddr + MPI_STS); + WATCHDOG_RESET(); + } while (mpi_sts.s.busy); +} +/** + * Claim the bus for a slave device + * + * @param dev SPI bus + * + * @return 0 for success, -EINVAL if chip select is invalid + */ +static int thunderx_spi_claim_bus(struct udevice *dev) +{ + void *baseaddr = thunderx_spi_get_baseaddr(dev); + union mpi_cfg mpi_cfg; + + if (!THUNDERX_SPI_CS_VALID(spi_chip_select(dev))) + return -EINVAL; + + mpi_cfg.u = readq(baseaddr + MPI_CFG); + mpi_cfg.s.tritx = 0; + mpi_cfg.s.enable = 1; + writeq(mpi_cfg.u, baseaddr + MPI_CFG); + + return 0; +} + +/** + * Release the bus to a slave device + * + * @param dev SPI bus + * + * @return 0 for success, -EINVAL if chip select is invalid + */ +static int thunderx_spi_release_bus(struct udevice *dev) +{ + void *baseaddr = thunderx_spi_get_baseaddr(dev); + union mpi_cfg mpi_cfg; + + if (!THUNDERX_SPI_CS_VALID(spi_chip_select(dev))) + return -EINVAL; + + mpi_cfg.u = readq(baseaddr + MPI_CFG); + mpi_cfg.s.enable = 0; + writeq(mpi_cfg.u, baseaddr + MPI_CFG); + + return 0; +} + +static int thunderx_spi_xfer(struct udevice *dev, unsigned int bitlen, + const void *dout, void *din, unsigned long flags) +{ + void *baseaddr = thunderx_spi_get_baseaddr(dev); + union mpi_tx mpi_tx; + union mpi_cfg mpi_cfg; + uint64_t wide_dat = 0; + int len = bitlen / 8; + int i; + const uint8_t *tx_data = dout; + uint8_t *rx_data = din; + int cs = spi_chip_select(dev); + + if (!THUNDERX_SPI_CS_VALID(cs)) + return -EINVAL; + + dev_dbg(dev, "%s bitlen=%u dout=%p din=%p flags=0x%lx CS%d\n", __func__, + bitlen, dout, din, flags, cs); + + mpi_cfg = thunderx_spi_set_mpicfg(dev); + + if (mpi_cfg.u != readq(baseaddr + MPI_CFG)) + writeq(mpi_cfg.u, baseaddr + MPI_CFG); + + /* Start by writing and reading 8 bytes at a time. While we can support + * up to 10, it's easier to just use 8 with the MPI_WIDE_DAT register. 
+ */ + while (len > 8) { + if (tx_data) { + wide_dat = get_unaligned((uint64_t *)tx_data); + debug(" tx: %016llx\n", (unsigned long long)wide_dat); + tx_data += 8; + writeq(wide_dat, baseaddr + MPI_WIDE_DAT); + } + mpi_tx.u = 0; + mpi_tx.s.csid = cs; + mpi_tx.s.leavecs = 1; + mpi_tx.s.txnum = tx_data ? 8 : 0; + mpi_tx.s.totnum = 8; + writeq(mpi_tx.u, baseaddr + MPI_TX); + + thunderx_spi_wait_ready(dev); + + if (rx_data) { + wide_dat = readq(baseaddr + MPI_WIDE_DAT); + debug(" rx: %016llx\n", (unsigned long long)wide_dat); + *(uint64_t *)rx_data = wide_dat; + rx_data += 8; + } + len -= 8; + } + + /* Write and read the rest of the data */ + if (tx_data) + for (i = 0; i < len; i++) { + debug(" tx: %02x\n", *tx_data); + writeq(*tx_data++, baseaddr + MPI_DAT(i)); + } + + mpi_tx.u = 0; + mpi_tx.s.csid = cs; + mpi_tx.s.leavecs = !(flags & SPI_XFER_END); + mpi_tx.s.txnum = tx_data ? len : 0; + mpi_tx.s.totnum = len; + + writeq(mpi_tx.u, baseaddr + MPI_TX); + + thunderx_spi_wait_ready(dev); + + if (rx_data) { + for (i = 0; i < len; i++) { + *rx_data = readq(baseaddr + MPI_DAT(i)) & 0xff; + debug(" rx: %02x\n", *rx_data); + rx_data++; + } + } + + return 0; +} + +/** + * Set the speed of the SPI bus + * + * @param bus bus to set + * @param max_hz maximum speed supported + */ +static int thunderx_spi_set_speed(struct udevice *bus, uint max_hz) +{ + struct thunderx_spi *priv = dev_get_priv(bus); + + dev_dbg(bus, "%s: max_hz=%u io=%llu\n", __func__, max_hz, + thunderx_get_io_clock()); + if (max_hz > THUNDERX_SPI_MAX_CLOCK_HZ) + max_hz = THUNDERX_SPI_MAX_CLOCK_HZ; + priv->clkdiv = thunderx_get_io_clock() / (2 * max_hz); + + return 0; +} + +static int thunderx_spi_set_mode(struct udevice *bus, uint mode) +{ + /* We don't set it here */ + return 0; +} + +static int thunderx_pci_spi_probe(struct udevice *dev) +{ + struct thunderx_spi *priv = dev_get_priv(dev); + pci_dev_t bdf = dm_pci_get_bdf(dev); + size_t size; + + dev->req_seq = PCI_FUNC(bdf); + priv->baseaddr = 
dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM); + dev_dbg(dev, "%s: SPI PCI device: bdf:%x base:%p\n", __func__, + bdf, priv->baseaddr); + + return 0; +} + +static const struct dm_spi_ops thunderx_spi_ops = { + .claim_bus = thunderx_spi_claim_bus, + .release_bus = thunderx_spi_release_bus, + .xfer = thunderx_spi_xfer, + .set_speed = thunderx_spi_set_speed, + .set_mode = thunderx_spi_set_mode, +}; + +static const struct udevice_id thunderx_spi_ids[] = { + { .compatible = "cavium,thunder-8890-spi" }, + { .compatible = "cavium,thunder-8190-spi" }, + { } +}; + +U_BOOT_DRIVER(thunderx_pci_spi) = { + .name = "spi_thunderx", + .id = UCLASS_SPI, + .of_match = thunderx_spi_ids, + .probe = thunderx_pci_spi_probe, + .priv_auto_alloc_size = sizeof(struct thunderx_spi), + .ops = &thunderx_spi_ops, +}; + +static struct pci_device_id thunderx_pci_spi_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_SPI) }, + { }, +}; + +U_BOOT_PCI_DEVICE(thunderx_pci_spi, thunderx_pci_spi_supported); +
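The CPOL/CPHA handling in thunderx_spi_set_mpicfg() is compact; as a minimal standalone sketch (the helper names here are illustrative, not part of the driver), the MPI_CFG idlelo/cslate fields are derived from the standard SPI mode bits like this:

```c
#include <stdbool.h>

/* SPI mode flag values as in U-Boot's include/spi.h */
#define SPI_CPHA 0x01	/* clock phase */
#define SPI_CPOL 0x02	/* clock polarity */

/* Illustrative helpers mirroring the assignments in
 * thunderx_spi_set_mpicfg() */
static bool mpi_idlelo(unsigned int mode)
{
	bool cpha = mode & SPI_CPHA;
	bool cpol = mode & SPI_CPOL;

	return cpha != cpol;	/* -> mpi_cfg.s.idlelo */
}

static bool mpi_cslate(unsigned int mode)
{
	return mode & SPI_CPHA;	/* -> mpi_cfg.s.cslate */
}
```

So modes 0 and 3 leave idlelo clear while modes 1 and 2 set it, and cslate simply follows CPHA.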

Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- drivers/usb/host/xhci-pci.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index d42f06bc32..3605357d33 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -18,9 +18,16 @@ static void xhci_pci_init(struct udevice *dev, struct xhci_hccr **ret_hccr, struct xhci_hcor *hcor; size_t size; u32 cmd; + u16 vendor, device; + int bar = PCI_BASE_ADDRESS_0;
+ dm_pci_read_config16(dev, PCI_VENDOR_ID, &vendor); + dm_pci_read_config16(dev, PCI_DEVICE_ID, &device); + if ((vendor == PCI_VENDOR_ID_CAVIUM) && + (device == PCI_DEVICE_ID_THUNDERX_XHCI)) + bar = 0; hccr = (struct xhci_hccr *)dm_pci_map_bar(dev, - PCI_BASE_ADDRESS_0, &size, PCI_REGION_MEM); + bar, &size, PCI_REGION_MEM); hcor = (struct xhci_hcor *)((uintptr_t) hccr + HC_LENGTH(xhci_readl(&hccr->cr_capbase)));
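One detail worth calling out in the hunk above: dm_pci_map_bar() in this U-Boot generation takes a config-space offset, and the ThunderX branch passes a literal 0 rather than one of the PCI_BASE_ADDRESS_n offsets (presumably handled specially by the Cavium ECAM code; that interpretation is an assumption based on this series). The standard type-0 header offsets themselves are fixed:

```c
/* PCI type-0 header: six 32-bit BARs starting at config offset 0x10 */
#define PCI_BASE_ADDRESS_0 0x10

/* Config-space offset of BAR n (n = 0..5 -> 0x10..0x24) */
static int pci_bar_offset(int barno)
{
	return PCI_BASE_ADDRESS_0 + 4 * barno;
}
```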

Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- configs/thunderx_81xx_defconfig | 5 +++++ include/configs/thunderx_81xx.h | 3 ++- 2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index 48f57ecf1b..8538f63181 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -25,6 +25,7 @@ CONFIG_CMD_GPIO=y CONFIG_CMD_I2C=y CONFIG_CMD_PCI=y CONFIG_CMD_SPI=y +CONFIG_CMD_USB=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y @@ -43,3 +44,7 @@ CONFIG_DEBUG_UART_SKIP_INIT=y CONFIG_SPI=y CONFIG_DM_SPI=y CONFIG_THUNDERX_SPI=y +CONFIG_USB=y +CONFIG_DM_USB=y +CONFIG_USB_XHCI_HCD=y +CONFIG_USB_XHCI_PCI=y diff --git a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h index 7a2c0adde2..dbca775699 100644 --- a/include/configs/thunderx_81xx.h +++ b/include/configs/thunderx_81xx.h @@ -55,7 +55,8 @@ "ramdisk_addr_r=0x03300000\0" \ BOOTENV
-#define BOOT_TARGET_DEVICES(func) +#define BOOT_TARGET_DEVICES(func) \ + func(USB, usb, 0)
#include <config_distro_bootcmd.h>
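BOOT_TARGET_DEVICES is an X-macro: config_distro_bootcmd.h passes its own per-device macros in as `func`, and each `func(TYPE, type, instance)` line expands to the boot-target boilerplate for that device. A toy model of the mechanics (the stringizing macro here is illustrative, not the real U-Boot one):

```c
#include <string.h>

/* Toy stand-in for the per-device macros that
 * config_distro_bootcmd.h supplies as 'func' */
#define MAKE_TARGET(TYPE, type, instance) #type #instance " "

/* As in the patch: one USB boot target */
#define BOOT_TARGET_DEVICES(func) \
	func(USB, usb, 0)

/* Adjacent string literals concatenate, yielding "usb0 " */
static const char boot_targets[] = BOOT_TARGET_DEVICES(MAKE_TARGET);
```

Adding another `func(...)` line (as later patches in this series do for SCSI) appends another target without touching the expansion machinery.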

--- configs/thunderx_81xx_defconfig | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index 8538f63181..bb374b741a 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -48,3 +48,6 @@ CONFIG_USB=y CONFIG_DM_USB=y CONFIG_USB_XHCI_HCD=y CONFIG_USB_XHCI_PCI=y +CONFIG_USB_STORAGE=y +CONFIG_USB_HOST_ETHER=y +CONFIG_USB_ETHER_ASIX=y

Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- drivers/ata/ahci.c | 26 +++++++++++++++++--------- include/ahci.h | 3 +++ 2 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index d6753f140d..81d16925dd 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -26,6 +26,9 @@ #include <dm/device-internal.h> #include <dm/lists.h>
+#define LOWER32(val) (u32)((u64)(val) & 0xffffffff) +#define UPPER32(val) (u32)(((u64)(val) & 0xffffffff00000000) >> 32) + static int ata_io_flush(struct ahci_uc_priv *uc_priv, u8 port);
#ifndef CONFIG_DM_SCSI @@ -517,8 +520,9 @@ static int ahci_fill_sg(struct ahci_uc_priv *uc_priv, u8 port,
for (i = 0; i < sg_count; i++) { ahci_sg->addr = - cpu_to_le32((unsigned long) buf + i * MAX_DATA_BYTE_COUNT); - ahci_sg->addr_hi = 0; + cpu_to_le32(LOWER32(buf + i * MAX_DATA_BYTE_COUNT)); + ahci_sg->addr_hi = + cpu_to_le32(UPPER32(buf + i * MAX_DATA_BYTE_COUNT)); ahci_sg->flags_size = cpu_to_le32(0x3fffff & (buf_len < MAX_DATA_BYTE_COUNT ? (buf_len - 1) @@ -535,11 +539,8 @@ static void ahci_fill_cmd_slot(struct ahci_ioports *pp, u32 opts) { pp->cmd_slot->opts = cpu_to_le32(opts); pp->cmd_slot->status = 0; - pp->cmd_slot->tbl_addr = cpu_to_le32((u32)pp->cmd_tbl & 0xffffffff); -#ifdef CONFIG_PHYS_64BIT - pp->cmd_slot->tbl_addr_hi = - cpu_to_le32((u32)(((pp->cmd_tbl) >> 16) >> 16)); -#endif + pp->cmd_slot->tbl_addr = cpu_to_le32(LOWER32(pp->cmd_tbl)); + pp->cmd_slot->tbl_addr_hi = cpu_to_le32(UPPER32(pp->cmd_tbl)); }
static int wait_spinup(void __iomem *port_mmio) @@ -609,10 +610,17 @@ static int ahci_port_start(struct ahci_uc_priv *uc_priv, u8 port) pp->cmd_tbl_sg = (struct ahci_sg *)(uintptr_t)virt_to_phys((void *)mem);
- writel_with_flush((unsigned long)pp->cmd_slot, + if (uc_priv->cap & HOST_CAP_64) + writel_with_flush(cpu_to_le32(UPPER32(pp->cmd_slot)), + port_mmio + PORT_LST_ADDR_HI); + writel_with_flush(cpu_to_le32(LOWER32(pp->cmd_slot)), port_mmio + PORT_LST_ADDR);
- writel_with_flush(pp->rx_fis, port_mmio + PORT_FIS_ADDR); + if (uc_priv->cap & HOST_CAP_64) + writel_with_flush(cpu_to_le32(UPPER32(pp->rx_fis)), + port_mmio + PORT_FIS_ADDR_HI); + writel_with_flush(cpu_to_le32(LOWER32(pp->rx_fis)), + port_mmio + PORT_FIS_ADDR);
#ifdef CONFIG_SUNXI_AHCI sunxi_dma_init(port_mmio); diff --git a/include/ahci.h b/include/ahci.h index b42df6c77e..6e439c184c 100644 --- a/include/ahci.h +++ b/include/ahci.h @@ -40,6 +40,9 @@ #define HOST_IRQ_EN (1 << 1) /* global IRQ enable */ #define HOST_AHCI_EN (1 << 31) /* AHCI enabled */
+/* HOST_CAP bits */ +#define HOST_CAP_64 (1 << 31) /* 64-bit addressing supported */ + /* Registers for each SATA port */ #define PORT_LST_ADDR 0x00 /* command list DMA addr */ #define PORT_LST_ADDR_HI 0x04 /* command list DMA addr hi */
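The LOWER32/UPPER32 helpers introduced above just split a 64-bit DMA address into the lo/hi register pair. A standalone sketch using stdint types (the driver uses U-Boot's u32/u64) shows that recombining the halves is lossless:

```c
#include <stdint.h>

/* Same shape as the macros added to drivers/ata/ahci.c */
#define LOWER32(val) (uint32_t)((uint64_t)(val) & 0xffffffff)
#define UPPER32(val) (uint32_t)(((uint64_t)(val) & 0xffffffff00000000ULL) >> 32)

/* Reassembling the two halves must give back the original address */
static uint64_t recombine(uint64_t addr)
{
	return ((uint64_t)UPPER32(addr) << 32) | LOWER32(addr);
}
```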

Instead of assuming 2 devices, get the max from the host controller.
Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- drivers/ata/ahci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index 81d16925dd..e7d590f69a 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -1181,12 +1181,12 @@ int ahci_probe_scsi(struct udevice *ahci_dev, ulong base) uc_plat = dev_get_uclass_platdata(dev); uc_plat->base = base; uc_plat->max_lun = 1; - uc_plat->max_id = 2;
uc_priv = dev_get_uclass_priv(ahci_dev); ret = ahci_init_one(uc_priv, dev); if (ret) return ret; + uc_plat->max_id = uc_priv->n_ports; ret = ahci_start_ports(uc_priv); if (ret) return ret;
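uc_priv->n_ports ultimately comes from the AHCI capability register: per the AHCI specification, CAP bits 4:0 (NP) hold the number of ports minus one. A sketch of the extraction (the field encoding follows the spec; the function name is illustrative):

```c
#include <stdint.h>

/* AHCI CAP.NP (bits 4:0) is a 0's-based count of supported ports */
static int ahci_cap_n_ports(uint32_t cap)
{
	return (cap & 0x1f) + 1;
}
```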

Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- configs/thunderx_81xx_defconfig | 7 +++++++ drivers/ata/ahci.c | 12 +++++++++--- 2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index bb374b741a..dbdadde5fc 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -6,6 +6,7 @@ CONFIG_DEBUG_UART_CLOCK=24000000 CONFIG_IDENT_STRING=" for Cavium Thunder CN81XX ARM v8 Multi-Core" CONFIG_THUNDERX_81XX=y CONFIG_DEBUG_UART=y +CONFIG_AHCI=y CONFIG_NR_DRAM_BANKS=1 CONFIG_BOOTDELAY=5 CONFIG_USE_BOOTARGS=y @@ -29,6 +30,10 @@ CONFIG_CMD_USB=y # CONFIG_CMD_NET is not set CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y +CONFIG_SATA=y +CONFIG_SCSI_AHCI=y +CONFIG_AHCI_PCI=y +CONFIG_BLK=y CONFIG_DM_GPIO=y CONFIG_DM_I2C=y CONFIG_LED=y @@ -38,6 +43,8 @@ CONFIG_PCI=y CONFIG_DM_PCI=y CONFIG_DM_PCI_COMPAT=y CONFIG_PCI_THUNDERX=y +CONFIG_SCSI=y +CONFIG_DM_SCSI=y CONFIG_DM_SERIAL=y CONFIG_DEBUG_UART_PL011=y CONFIG_DEBUG_UART_SKIP_INIT=y diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index e7d590f69a..efbd509807 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -1199,9 +1199,15 @@ int ahci_probe_scsi_pci(struct udevice *ahci_dev) { ulong base; size_t size; - - base = (ulong)dm_pci_map_bar(ahci_dev, PCI_BASE_ADDRESS_5, &size, - PCI_REGION_MEM); + u16 vendor, device; + int bar = PCI_BASE_ADDRESS_5; + + dm_pci_read_config16(ahci_dev, PCI_VENDOR_ID, &vendor); + dm_pci_read_config16(ahci_dev, PCI_DEVICE_ID, &device); + if ((vendor == PCI_VENDOR_ID_CAVIUM) && + (device == PCI_DEVICE_ID_THUNDERX_AHCI)) + bar = 0; + base = (ulong)dm_pci_map_bar(ahci_dev, bar, &size, PCI_REGION_MEM);
return ahci_probe_scsi(ahci_dev, base); }

Signed-off-by: Tim Harvey <tharvey@gateworks.com> --- include/configs/thunderx_81xx.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h index dbca775699..c32aa7844c 100644 --- a/include/configs/thunderx_81xx.h +++ b/include/configs/thunderx_81xx.h @@ -56,7 +56,8 @@ BOOTENV
#define BOOT_TARGET_DEVICES(func) \ - func(USB, usb, 0) + func(USB, usb, 0) \ + func(SCSI, scsi, 0)
#include <config_distro_bootcmd.h>

Note that we move the Ethernet init to early init and disable it later in eth_legacy, in order to make sure it is initialized in time for the PCI NIC.
Signed-off-by: Tim Harvey tharvey@gateworks.com --- arch/arm/include/asm/arch-thunderx/fdt.h | 12 + .../include/asm/arch-thunderx/thunderx_vnic.h | 15 + arch/arm/mach-thunderx/fdt.c | 170 +- board/cavium/thunderx/thunderx.c | 30 +- configs/thunderx_81xx_defconfig | 7 +- drivers/net/Kconfig | 34 + drivers/net/Makefile | 1 + drivers/net/cavium/Makefile | 8 + drivers/net/cavium/nic.h | 569 ++++++ drivers/net/cavium/nic_main.c | 795 +++++++++ drivers/net/cavium/nic_reg.h | 228 +++ drivers/net/cavium/nicvf_main.c | 553 ++++++ drivers/net/cavium/nicvf_queues.c | 1123 ++++++++++++ drivers/net/cavium/nicvf_queues.h | 364 ++++ drivers/net/cavium/q_struct.h | 692 ++++++++ drivers/net/cavium/thunder_bgx.c | 1529 +++++++++++++++++ drivers/net/cavium/thunder_bgx.h | 259 +++ drivers/net/cavium/thunder_xcv.c | 190 ++ drivers/net/cavium/thunderx_smi.c | 388 +++++ include/configs/thunderx_81xx.h | 11 +- net/eth_legacy.c | 3 + 21 files changed, 6973 insertions(+), 8 deletions(-) create mode 100644 arch/arm/include/asm/arch-thunderx/fdt.h create mode 100644 arch/arm/include/asm/arch-thunderx/thunderx_vnic.h create mode 100644 drivers/net/cavium/Makefile create mode 100644 drivers/net/cavium/nic.h create mode 100644 drivers/net/cavium/nic_main.c create mode 100644 drivers/net/cavium/nic_reg.h create mode 100644 drivers/net/cavium/nicvf_main.c create mode 100644 drivers/net/cavium/nicvf_queues.c create mode 100644 drivers/net/cavium/nicvf_queues.h create mode 100644 drivers/net/cavium/q_struct.h create mode 100644 drivers/net/cavium/thunder_bgx.c create mode 100644 drivers/net/cavium/thunder_bgx.h create mode 100644 drivers/net/cavium/thunder_xcv.c create mode 100644 drivers/net/cavium/thunderx_smi.c
diff --git a/arch/arm/include/asm/arch-thunderx/fdt.h b/arch/arm/include/asm/arch-thunderx/fdt.h new file mode 100644 index 0000000000..7f61e7694e --- /dev/null +++ b/arch/arm/include/asm/arch-thunderx/fdt.h @@ -0,0 +1,12 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2018 (C) Cavium Inc. + */ +#ifndef __THUNDERX_FDT_H__ +#define __THUNDERX_FDT_H__ + +void thunderx_parse_phy_info(void); +void thunderx_parse_mac_addr(void); + +#endif + diff --git a/arch/arm/include/asm/arch-thunderx/thunderx_vnic.h b/arch/arm/include/asm/arch-thunderx/thunderx_vnic.h new file mode 100644 index 0000000000..f1e372c61e --- /dev/null +++ b/arch/arm/include/asm/arch-thunderx/thunderx_vnic.h @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2018 (C) Cavium Inc. + */ +#ifndef VNIC_H_ +#define VNIC_H_ + +int nicvf_initialize(struct udevice *dev, int vf_num); +int thunderx_bgx_initialize(unsigned int bgx_idx, unsigned int node); +void bgx_get_count(int node, int *bgx_count); +int bgx_get_lmac_count(int node, int bgx_idx); +void bgx_set_board_info(int bgx_id, int *mdio_bus, int *phy_addr, + bool *autoneg_dis, bool *lmac_reg, bool *lmac_enable); + +#endif /* VNIC_H_ */ diff --git a/arch/arm/mach-thunderx/fdt.c b/arch/arm/mach-thunderx/fdt.c index 31f1128e9f..c03b0ec939 100644 --- a/arch/arm/mach-thunderx/fdt.c +++ b/arch/arm/mach-thunderx/fdt.c @@ -12,6 +12,175 @@ /* From lowlevel_init.S */ extern unsigned long fdt_base_addr;
+#if defined(CONFIG_THUNDERX_VNIC) +#include <asm/arch-thunderx/thunderx_vnic.h> + +#define MAX_LMAC_PER_BGX 4 + +DECLARE_GLOBAL_DATA_PTR; + +/** + * Parse PHY info from device-tree and pass along via bgx_set_board_info() + * + * The Cavium ThunderX device-trees RGX/BGX nodes are child nodes of the + * mrml_bridge node alongside the mdio-nexus MDIO node. + * + * There can be one RGMII MAC (rgx0) and up to MAX_LMAC_PER_BGX BGX MACs + * (bgx<0..MAX_LMAC_PER_BGX-1>). + * + * Each MAC can contain multiple logical MAC's (LMAC), for example a QSGMII + * PHY has 4 LMACs. These LMAC's are children of the bgx/rgx node and can + * contain the following properties: + * reg (required) - LMAC instance + * qlm-mode (optional) - CNX81XX QLM phy mode (ie qsgmii, rgmii) + * phy-handle (required) - points to PHY node + * local-mac-address (optional) - MAC address which can be filled in by + * firmware. + * + * These details will be parsed from the device-tree and passed into the + * VNIC driver bgx_set_board_info() to configure it. + */ +static int thunderx_parse_phy(int bgx_id) +{ + const void *fdt = gd->fdt_blob; + int offset, node, lmacid = 0; + int phy_offset, phandle; + int bdknode; + const u32 *val; + const char *str; + char name[24]; + int len, eth_id = 0; + int subnode, i; + int addr[MAX_LMAC_PER_BGX] = { [0 ... MAX_LMAC_PER_BGX - 1] = -1}; + int bus[MAX_LMAC_PER_BGX] = { [0 ... MAX_LMAC_PER_BGX - 1] = -1}; + bool aneg_dis[MAX_LMAC_PER_BGX] = { [0 ... MAX_LMAC_PER_BGX - 1] = 0}; + bool lmac_reg[MAX_LMAC_PER_BGX] = { [0 ... MAX_LMAC_PER_BGX - 1] = 0}; + bool lmac_en[MAX_LMAC_PER_BGX] = { [0 ... MAX_LMAC_PER_BGX - 1] = 0}; + + bdknode = fdt_path_offset(fdt, "/cavium,bdk"); + if (bdknode < 0) { + printf("%s: /cavium,bdk is missing from device tree. 
" + "LMAC's will be disabled\n", __func__); + } + + offset = fdt_node_offset_by_compatible(fdt, -1, "pci-bridge"); + if (offset < 0) + return -ENODEV; + + /* find bgx/rgx node */ + snprintf(name, sizeof(name), "bgx%d", bgx_id); + node = fdt_subnode_offset(fdt, offset, name); + if (node < 0) { + snprintf(name, sizeof(name), "rgx0"); + node = fdt_subnode_offset(fdt, offset, name); + if (node < 0) + return -ENODEV; + } + debug("%s: %s found at %d\n", __func__, name, node); + + /* interate through the LMAC subnodes */ + fdt_for_each_subnode(subnode, fdt, node) { + /* reg required to enable LMAC */ + if (fdt_getprop(fdt, subnode, "reg", &len)) + lmac_reg[lmacid] = 1; + + /* get bus/addr encoded from phy 'reg' prop */ + phy_offset = -1; + phandle = 0; + val = fdt_getprop(fdt, subnode, "phy-handle", &len); + if (val) + phandle = fdt32_to_cpu(*val); + if (phandle) + phy_offset = fdt_node_offset_by_phandle(fdt, phandle); + if (phy_offset > 0) { + val = fdt_getprop(fdt, phy_offset, "reg", NULL); + bus[lmacid] = (fdt32_to_cpu(*val) & (1 << 7)) ? 1 : 0; + addr[lmacid] = fdt32_to_cpu(*val); + } + + /* check for autonegotiation property */ + if (fdt_getprop(fdt, subnode, "cavium,disable-autonegotiation", + &len)) + aneg_dis[lmacid] = 1; + + eth_id++; + lmacid++; + } + + /* determine if LMAC is enabled via bdk prop */ + if (bdknode) { + for (i = 0; i < MAX_LMAC_PER_BGX; i++) { + snprintf(name, sizeof(name), "BGX-ENABLE.N0.BGX%d.P%d", + bgx_id, i); + str = fdt_getprop(fdt, bdknode, name, &len); + if (str) + lmac_en[i] = simple_strtol(str, NULL, 10); + } + } + + bgx_set_board_info(bgx_id, bus, addr, aneg_dis, lmac_reg, lmac_en); + + return 0; +} + +void thunderx_parse_phy_info(void) +{ + int bgx_id; + + for (bgx_id = 0; bgx_id < CONFIG_MAX_BGX_PER_NODE; bgx_id++) + thunderx_parse_phy(bgx_id); +} + +/* Parse MAC addrs from device-tree + * + * The Cavium ThunderX device-trees RGX/BGX MAC's are child nodes of the + * mrml_bridge node alongside the mdio bus named 'rgx<n>' and 'bgx<n>'. 
+ * These nodes have a phy-handle pointing to the mdio bus and a + * local-mac-address property set by firmware. + */ +void thunderx_parse_mac_addr(void) +{ + const uint8_t *mac_addr; + const void *fdt = gd->fdt_blob; + int subnode; + int len, eth_id = 0; + int offset = 0, node, bgx_id = 0, lmacid = 0; + char name[16], env[16]; + + debug("%s\n", __func__); + offset = fdt_node_offset_by_compatible(fdt, -1, "pci-bridge"); + if (offset < 0) + return; + + for (bgx_id = 0; bgx_id < CONFIG_MAX_BGX_PER_NODE; bgx_id++) { + /* find bgx/rgx node */ + snprintf(name, sizeof(name), "bgx%d", bgx_id); + node = fdt_subnode_offset(fdt, offset, name); + if (node < 0) { + snprintf(name, sizeof(name), "rgx0"); + node = fdt_subnode_offset(fdt, offset, name); + if (node < 0) + continue; + } + debug("%s found\n", name); + + fdt_for_each_subnode(subnode, fdt, node) { + sprintf(env, eth_id ? "eth%daddr" : "ethaddr", + eth_id); + mac_addr = fdt_getprop(fdt, subnode, + "local-mac-address", &len); + if (mac_addr) { + debug("got %s MAC:%pM\n", name, mac_addr); + eth_env_set_enetaddr(env, mac_addr); + } + eth_id++; + lmacid++; + } + lmacid = 0; + } +} +#endif // if defined(CONFIG_THUNDERX_VNIC) + /** * If the firmware passed a device tree use it for U-Boot * @@ -47,4 +216,3 @@ int ft_board_setup(void *blob, bd_t *bd)
return ret; } - diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c index 28cf2aee22..9f9ac7ea10 100644 --- a/board/cavium/thunderx/thunderx.c +++ b/board/cavium/thunderx/thunderx.c @@ -8,9 +8,12 @@ #include <errno.h> #include <fdt_support.h> #include <malloc.h> +#include <netdev.h> #include <linux/compiler.h>
#include <asm/arch-thunderx/atf.h> +#include <asm/arch-thunderx/fdt.h> +#include <asm/arch-thunderx/thunderx.h> #include <asm/armv8/mmu.h>
#if !CONFIG_IS_ENABLED(OF_CONTROL) @@ -73,8 +76,10 @@ static struct mm_region thunderx_mem_map[] = { struct mm_region *mem_map = thunderx_mem_map;
#ifdef CONFIG_BOARD_EARLY_INIT_R +extern void eth_common_init(void); int board_early_init_r(void) { + eth_common_init(); pci_init();
return 0; @@ -87,6 +92,10 @@ int board_init(void) ulong fdt_addr = (ulong)fdt_base_addr; set_working_fdt_addr(fdt_addr); #endif +#if defined(CONFIG_THUNDERX_VNIC) + thunderx_parse_phy_info(); +#endif + return 0; }
@@ -127,12 +136,25 @@ void reset_cpu(ulong addr) { }
-/* +#if defined(CONFIG_THUNDERX_VNIC) +#if defined(CONFIG_BOARD_LATE_INIT) +/** + * Board late initialization routine. + */ +int board_late_init(void) +{ + thunderx_parse_mac_addr(); + + return 0; +} +#endif + +/** * Board specific ethernet initialization routine. */ int board_eth_init(bd_t *bis) { - int rc = 0; - - return rc; + /* this will init non dt, pci based enet devices that are enabled */ + return pci_eth_init(bis); } +#endif // if defined(CONFIG_THUNDERX_VNIC) diff --git a/configs/thunderx_81xx_defconfig b/configs/thunderx_81xx_defconfig index dbdadde5fc..7ecf658734 100644 --- a/configs/thunderx_81xx_defconfig +++ b/configs/thunderx_81xx_defconfig @@ -27,7 +27,10 @@ CONFIG_CMD_I2C=y CONFIG_CMD_PCI=y CONFIG_CMD_SPI=y CONFIG_CMD_USB=y -# CONFIG_CMD_NET is not set +CONFIG_CMD_DHCP=y +CONFIG_CMD_MII=y +CONFIG_CMD_PING=y +CONFIG_CMD_PXE=y CONFIG_DEFAULT_DEVICE_TREE="thunderx-81xx" CONFIG_DM=y CONFIG_SATA=y @@ -39,6 +42,8 @@ CONFIG_DM_I2C=y CONFIG_LED=y CONFIG_LED_GPIO=y # CONFIG_MMC is not set +CONFIG_THUNDERX_VNIC=y +CONFIG_THUNDERX_BGX=y CONFIG_PCI=y CONFIG_DM_PCI=y CONFIG_DM_PCI_COMPAT=y diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 39687431fb..68fec42f4a 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -327,6 +327,40 @@ config DRIVER_TI_EMAC help Support for davinci emac
+config THUNDERX_VNIC + bool "Cavium ThunderX VNIC support" + depends on ARCH_THUNDERX + select PHYLIB + select MISC + help + This driver supports the Cavium ThunderX VNIC ethernet MAC/PHY/MDIO. + It can be found in CN80XX/CN81XX/CN88XX/CN93XX based SoCs. + +if THUNDERX_VNIC + +config MAX_BGX_PER_NODE + hex "Maximum number of BGX interfaces per CPU node" + default 3 + +config MAX_BGX + hex "Maximum number of BGX interfaces across all nodes" + default 3 + +config THUNDERX_XCV + bool + default y + +config THUNDERX_SMI + bool + default y + +config THUNDERX_BGX + select THUNDERX_XCV + select THUNDERX_SMI + bool "Enable ThunderX BGX networking support" + +endif + config XILINX_AXIEMAC depends on DM_ETH && (MICROBLAZE || ARCH_ZYNQ || ARCH_ZYNQMP) select PHYLIB diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 48a2878071..922b8cdd94 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -58,6 +58,7 @@ obj-$(CONFIG_SMC91111) += smc91111.o obj-$(CONFIG_SMC911X) += smc911x.o obj-$(CONFIG_DRIVER_TI_EMAC) += davinci_emac.o obj-$(CONFIG_TSEC_ENET) += tsec.o fsl_mdio.o +obj-$(CONFIG_THUNDERX_VNIC) += cavium/ obj-$(CONFIG_DRIVER_TI_CPSW) += cpsw.o cpsw-common.o obj-$(CONFIG_FMAN_ENET) += fsl_mdio.o obj-$(CONFIG_ULI526X) += uli526x.o diff --git a/drivers/net/cavium/Makefile b/drivers/net/cavium/Makefile new file mode 100644 index 0000000000..af1834e387 --- /dev/null +++ b/drivers/net/cavium/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for Cavium's Thunder ethernet device +# +obj-y = nic_main.o nicvf_queues.o nicvf_main.o +obj-$(CONFIG_THUNDERX_BGX) += thunder_bgx.o +obj-$(CONFIG_THUNDERX_XCV) += thunder_xcv.o +obj-$(CONFIG_THUNDERX_SMI) += thunderx_smi.o diff --git a/drivers/net/cavium/nic.h b/drivers/net/cavium/nic.h new file mode 100644 index 0000000000..7a61a0ba7a --- /dev/null +++ b/drivers/net/cavium/nic.h @@ -0,0 +1,569 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#ifndef NIC_H 
+#define NIC_H
+
+#include <linux/netdevice.h>
+
+#include "thunder_bgx.h"
+
+/* Subsystem device IDs */
+#define PCI_SUBSYS_DEVID_88XX_NIC_PF	0xA11E
+#define PCI_SUBSYS_DEVID_81XX_NIC_PF	0xA21E
+#define PCI_SUBSYS_DEVID_83XX_NIC_PF	0xA31E
+
+#define PCI_SUBSYS_DEVID_88XX_PASS1_NIC_VF	0xA11E
+#define PCI_SUBSYS_DEVID_88XX_NIC_VF	0xA134
+#define PCI_SUBSYS_DEVID_81XX_NIC_VF	0xA234
+#define PCI_SUBSYS_DEVID_83XX_NIC_VF	0xA334
+
+#define NIC_INTF_COUNT	2 /* Interfaces between VNIC and TNS/BGX */
+#define NIC_CHANS_PER_INF	128
+#define NIC_MAX_CHANS	(NIC_INTF_COUNT * NIC_CHANS_PER_INF)
+
+/* PCI BAR numbers */
+#define PCI_CFG_REG_BAR_NUM	0
+#define PCI_MSIX_REG_BAR_NUM	4
+
+/* NIC SR-IOV VF count */
+#define MAX_NUM_VFS_SUPPORTED	128
+#define DEFAULT_NUM_VF_ENABLED	8
+
+#define NIC_TNS_BYPASS_MODE	0
+#define NIC_TNS_MODE	1
+
+/* NIC priv flags */
+#define NIC_SRIOV_ENABLED	BIT(0)
+#define NIC_TNS_ENABLED	BIT(1)
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS	64
+#define NIC_HW_MAX_FRS	9190 /* Excluding L2 header and FCS */
+
+/* Max pkinds */
+#define NIC_MAX_PKIND	16
+
+/* Max when CPI_ALG is IP diffserv */
+#define NIC_MAX_CPI_PER_LMAC	64
+
+/* NIC VF Interrupts */
+#define NICVF_INTR_CQ	0
+#define NICVF_INTR_SQ	1
+#define NICVF_INTR_RBDR	2
+#define NICVF_INTR_PKT_DROP	3
+#define NICVF_INTR_TCP_TIMER	4
+#define NICVF_INTR_MBOX	5
+#define NICVF_INTR_QS_ERR	6
+
+#define NICVF_INTR_CQ_SHIFT	0
+#define NICVF_INTR_SQ_SHIFT	8
+#define NICVF_INTR_RBDR_SHIFT	16
+#define NICVF_INTR_PKT_DROP_SHIFT	20
+#define NICVF_INTR_TCP_TIMER_SHIFT	21
+#define NICVF_INTR_MBOX_SHIFT	22
+#define NICVF_INTR_QS_ERR_SHIFT	23
+
+#define NICVF_INTR_CQ_MASK	(0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK	(0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK	(0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK	BIT(NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK	BIT(NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK	BIT(NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK	BIT(NICVF_INTR_QS_ERR_SHIFT)
+
+/* MSI-X interrupts */
+#define NIC_PF_MSIX_VECTORS	10
+#define NIC_VF_MSIX_VECTORS	20
+
+#define NIC_PF_INTR_ID_ECC0_SBE	0
+#define NIC_PF_INTR_ID_ECC0_DBE	1
+#define NIC_PF_INTR_ID_ECC1_SBE	2
+#define NIC_PF_INTR_ID_ECC1_DBE	3
+#define NIC_PF_INTR_ID_ECC2_SBE	4
+#define NIC_PF_INTR_ID_ECC2_DBE	5
+#define NIC_PF_INTR_ID_ECC3_SBE	6
+#define NIC_PF_INTR_ID_ECC3_DBE	7
+#define NIC_PF_INTR_ID_MBOX0	8
+#define NIC_PF_INTR_ID_MBOX1	9
+
+/* Global timer for CQ timer thresh interrupts
+ * Calculated for SCLK of 700MHz;
+ * the value written should be 1/16th of what is expected.
+ *
+ * 1 tick per ms
+ */
+#define NICPF_CLK_PER_INT_TICK	43750
+
+struct nicvf_cq_poll {
+	u8 cq_idx; /* Completion queue index */
+};
+
+#define NIC_MAX_RSS_HASH_BITS	8
+#define NIC_MAX_RSS_IDR_TBL_SIZE	(1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE	5 /* 320 bit key */
+
+struct nicvf_rss_info {
+	bool enable;
+#define RSS_L2_EXTENDED_HASH_ENA	BIT(0)
+#define RSS_IP_HASH_ENA	BIT(1)
+#define RSS_TCP_HASH_ENA	BIT(2)
+#define RSS_TCP_SYN_DIS	BIT(3)
+#define RSS_UDP_HASH_ENA	BIT(4)
+#define RSS_L4_EXTENDED_HASH_ENA	BIT(5)
+#define RSS_ROCE_ENA	BIT(6)
+#define RSS_L3_BI_DIRECTION_ENA	BIT(7)
+#define RSS_L4_BI_DIRECTION_ENA	BIT(8)
+	u64 cfg;
+	u8 hash_bits;
+	u16 rss_size;
+	u8 ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	u64 key[RSS_HASH_KEY_SIZE];
+};
+
+struct nicvf_pfc {
+	u8 autoneg;
+	u8 fc_rx;
+	u8 fc_tx;
+};
+
+enum rx_stats_reg_offset {
+	RX_OCTS = 0x0,
+	RX_UCAST = 0x1,
+	RX_BCAST = 0x2,
+	RX_MCAST = 0x3,
+	RX_RED = 0x4,
+	RX_RED_OCTS = 0x5,
+	RX_ORUN = 0x6,
+	RX_ORUN_OCTS = 0x7,
+	RX_FCS = 0x8,
+	RX_L2ERR = 0x9,
+	RX_DRP_BCAST = 0xa,
+	RX_DRP_MCAST = 0xb,
+	RX_DRP_L3BCAST = 0xc,
+	RX_DRP_L3MCAST = 0xd,
+	RX_STATS_ENUM_LAST,
+};
+
+enum tx_stats_reg_offset {
+	TX_OCTS = 0x0,
+	TX_UCAST = 0x1,
+	TX_BCAST = 0x2,
+	TX_MCAST = 0x3,
+	TX_DROP = 0x4,
+	TX_STATS_ENUM_LAST,
+};
+
+struct nicvf_hw_stats {
+	u64 rx_bytes;
+	u64 rx_frames;
+	u64 rx_ucast_frames;
+	u64 rx_bcast_frames;
+	u64 rx_mcast_frames;
+	u64 rx_drops;
+	u64 rx_drop_red;
+	u64 rx_drop_red_bytes;
+	u64 rx_drop_overrun;
+	u64 rx_drop_overrun_bytes;
+	u64 rx_drop_bcast;
+	u64 rx_drop_mcast;
+	u64 rx_drop_l3_bcast;
+	u64 rx_drop_l3_mcast;
+	u64 rx_fcs_errors;
+	u64 rx_l2_errors;
+
+	u64 tx_bytes;
+	u64 tx_frames;
+	u64 tx_ucast_frames;
+	u64 tx_bcast_frames;
+	u64 tx_mcast_frames;
+	u64 tx_drops;
+};
+
+struct nicvf_drv_stats {
+	/* CQE Rx errs */
+	u64 rx_bgx_truncated_pkts;
+	u64 rx_jabber_errs;
+	u64 rx_fcs_errs;
+	u64 rx_bgx_errs;
+	u64 rx_prel2_errs;
+	u64 rx_l2_hdr_malformed;
+	u64 rx_oversize;
+	u64 rx_undersize;
+	u64 rx_l2_len_mismatch;
+	u64 rx_l2_pclp;
+	u64 rx_ip_ver_errs;
+	u64 rx_ip_csum_errs;
+	u64 rx_ip_hdr_malformed;
+	u64 rx_ip_payload_malformed;
+	u64 rx_ip_ttl_errs;
+	u64 rx_l3_pclp;
+	u64 rx_l4_malformed;
+	u64 rx_l4_csum_errs;
+	u64 rx_udp_len_errs;
+	u64 rx_l4_port_errs;
+	u64 rx_tcp_flag_errs;
+	u64 rx_tcp_offset_errs;
+	u64 rx_l4_pclp;
+	u64 rx_truncated_pkts;
+
+	/* CQE Tx errs */
+	u64 tx_desc_fault;
+	u64 tx_hdr_cons_err;
+	u64 tx_subdesc_err;
+	u64 tx_max_size_exceeded;
+	u64 tx_imm_size_oflow;
+	u64 tx_data_seq_err;
+	u64 tx_mem_seq_err;
+	u64 tx_lock_viol;
+	u64 tx_data_fault;
+	u64 tx_tstmp_conflict;
+	u64 tx_tstmp_timeout;
+	u64 tx_mem_fault;
+	u64 tx_csum_overlap;
+	u64 tx_csum_overflow;
+
+	/* driver debug stats */
+	u64 tx_tso;
+	u64 tx_timeout;
+	u64 txq_stop;
+	u64 txq_wake;
+
+	u64 rcv_buffer_alloc_failures;
+	u64 page_alloc;
+};
+
+struct hw_info {
+	u8 bgx_cnt;
+	u8 chans_per_lmac;
+	u8 chans_per_bgx; /* Rx/Tx chans */
+	u8 chans_per_rgx;
+	u8 chans_per_lbk;
+	u16 cpi_cnt;
+	u16 rssi_cnt;
+	u16 rss_ind_tbl_size;
+	u16 tl4_cnt;
+	u16 tl3_cnt;
+	u8 tl2_cnt;
+	u8 tl1_cnt;
+	bool tl1_per_bgx; /* TL1 per BGX or per LMAC */
+	u8 model_id;
+};
+
+struct nicvf {
+	struct eth_device *netdev;
+	u8 vf_id;
+	bool sqs_mode:1;
+	bool loopback_supported:1;
+	u8 tns_mode;
+	u8 node;
+	u16 mtu;
+	struct queue_set *qs;
+#define MAX_SQS_PER_VF_SINGLE_NODE	5
+#define MAX_SQS_PER_VF	11
+	u8 num_qs;
+	void *addnl_qs;
+	u16 vf_mtu;
+	void __iomem *reg_base;
+#define MAX_QUEUES_PER_QSET	8
+	struct nicvf_cq_poll *napi[8];
+
+	u8 cpi_alg;
+
+	struct nicvf_hw_stats stats;
+	struct nicvf_drv_stats drv_stats;
+
+	struct nicpf *nicpf;
+
+	/* VF <-> PF mailbox communication */
+	bool pf_acked;
+	bool pf_nacked;
+	bool set_mac_pending;
+
+	bool link_up;
+	u8 duplex;
+	u32 speed;
+	u8 rev_id;
+	u8 rx_queues;
+	u8 tx_queues;
+
+	bool open;
+	bool rb_alloc_fail;
+	void *rcv_buf;
+	bool hw_tso;
+};
+
+static inline int node_id(void *addr)
+{
+	return ((uintptr_t)addr >> 44) & 0x3;
+}
+
+struct nicpf {
+	struct udevice *udev;
+	struct eth_device *netdev;
+	struct hw_info *hw;
+	u8 node;
+	unsigned int flags;
+	u16 total_vf_cnt;	/* Total number of VFs supported */
+	u16 num_vf_en;		/* Number of VFs enabled */
+	void __iomem *reg_base;	/* Register start address */
+	u16 rss_ind_tbl_size;
+	u8 num_sqs_en;		/* Secondary qsets enabled */
+	u64 nicvf[MAX_NUM_VFS_SUPPORTED];
+	u8 vf_sqs[MAX_NUM_VFS_SUPPORTED][MAX_SQS_PER_VF];
+	u8 pqs_vf[MAX_NUM_VFS_SUPPORTED];
+	bool sqs_used[MAX_NUM_VFS_SUPPORTED];
+	struct pkind_cfg pkind;
+	u8 bgx_cnt;
+	u8 rev_id;
+#define NIC_SET_VF_LMAC_MAP(bgx, lmac)	(((bgx & 0xF) << 4) | (lmac & 0xF))
+#define NIC_GET_BGX_FROM_VF_LMAC_MAP(map)	((map >> 4) & 0xF)
+#define NIC_GET_LMAC_FROM_VF_LMAC_MAP(map)	(map & 0xF)
+	u8 vf_lmac_map[MAX_LMAC];
+	u16 cpi_base[MAX_NUM_VFS_SUPPORTED];
+	u64 mac[MAX_NUM_VFS_SUPPORTED];
+	bool mbx_lock[MAX_NUM_VFS_SUPPORTED];
+	u8 link[MAX_LMAC];
+	u8 duplex[MAX_LMAC];
+	u32 speed[MAX_LMAC];
+	bool vf_enabled[MAX_NUM_VFS_SUPPORTED];
+	u16 rssi_base[MAX_NUM_VFS_SUPPORTED];
+	u8 lmac_cnt;
+};
+
+/* PF <--> VF Mailbox communication
+ * Eight 64bit registers are shared between PF and VF.
+ * Separate set for each VF.
+ * Writing '1' into last register mbx7 means end of message.
+ */
+
+/* PF <--> VF mailbox communication */
+#define NIC_PF_VF_MAILBOX_SIZE	2
+#define NIC_PF_VF_MBX_TIMEOUT	2000 /* ms */
+
+/* Mailbox message types */
+#define NIC_MBOX_MSG_READY	0x01	/* Is PF ready to rcv msgs */
+#define NIC_MBOX_MSG_ACK	0x02	/* ACK the message received */
+#define NIC_MBOX_MSG_NACK	0x03	/* NACK the message received */
+#define NIC_MBOX_MSG_QS_CFG	0x04	/* Configure Qset */
+#define NIC_MBOX_MSG_RQ_CFG	0x05	/* Configure receive queue */
+#define NIC_MBOX_MSG_SQ_CFG	0x06	/* Configure Send queue */
+#define NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue */
+#define NIC_MBOX_MSG_SET_MAC	0x08	/* Add MAC ID to DMAC filter */
+#define NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define NIC_MBOX_MSG_CPI_CFG	0x0A	/* Config CPI, RSSI */
+#define NIC_MBOX_MSG_RSS_SIZE	0x0B	/* Get RSS indir_tbl size */
+#define NIC_MBOX_MSG_RSS_CFG	0x0C	/* Config RSS table */
+#define NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define NIC_MBOX_MSG_RQ_BP_CFG	0x0E	/* RQ backpressure config */
+#define NIC_MBOX_MSG_RQ_SW_SYNC	0x0F	/* Flush inflight pkts to RQ */
+#define NIC_MBOX_MSG_BGX_STATS	0x10	/* Get stats from BGX */
+#define NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define NIC_MBOX_MSG_ALLOC_SQS	0x12	/* Allocate secondary Qset */
+#define NIC_MBOX_MSG_NICVF_PTR	0x13	/* Send nicvf ptr to PF */
+#define NIC_MBOX_MSG_PNICVF_PTR	0x14	/* Get primary qset nicvf ptr */
+#define NIC_MBOX_MSG_SNICVF_PTR	0x15	/* Send sqet nicvf ptr to PVF */
+#define NIC_MBOX_MSG_LOOPBACK	0x16	/* Set interface in loopback */
+#define NIC_MBOX_MSG_RESET_STAT_COUNTER	0x17	/* Reset statistics counters */
+#define NIC_MBOX_MSG_PFC	0x18	/* Pause frame control */
+#define NIC_MBOX_MSG_CFG_DONE	0xF0	/* VF configuration done */
+#define NIC_MBOX_MSG_SHUTDOWN	0xF1	/* VF is being shutdown */
+
+struct nic_cfg_msg {
+	u8 msg;
+	u8 vf_id;
+	u8 node_id;
+	u8 tns_mode:1;
+	u8 sqs_mode:1;
+	u8 loopback_supported:1;
+	u8 mac_addr[ETH_ALEN];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	u8 msg;
+	u8 num;
+	u8 sqs_count;
+	u64 cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	u8 msg;
+	u8 qs_num;
+	u8 rq_num;
+	u64 cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	u8 msg;
+	u8 qs_num;
+	u8 sq_num;
+	bool sqs_mode;
+	u64 cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	u8 msg;
+	u8 vf_id;
+	u8 mac_addr[ETH_ALEN];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	u8 msg;
+	u8 vf_id;
+	u16 max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	u8 msg;
+	u8 vf_id;
+	u8 rq_cnt;
+	u8 cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	u8 msg;
+	u8 vf_id;
+	u16 ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	u8 msg;
+	u8 vf_id;
+	u8 hash_bits;
+	u8 tbl_len;
+	u8 tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	u8 ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+struct bgx_stats_msg {
+	u8 msg;
+	u8 vf_id;
+	u8 rx;
+	u8 idx;
+	u64 stats;
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	u8 msg;
+	u8 link_up;
+	u8 duplex;
+	u32 speed;
+};
+
+#ifdef VNIC_MULTI_QSET_SUPPORT
+/* Get Extra Qset IDs */
+struct sqs_alloc {
+	u8 msg;
+	u8 vf_id;
+	u8 qs_count;
+};
+#endif
+
+struct nicvf_ptr {
+	u8 msg;
+	u8 vf_id;
+	bool sqs_mode;
+	u8 sqs_id;
+	u64 nicvf;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	u8 msg;
+	u8 vf_id;
+	bool enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	u8 msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	u16 rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	u8 tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	u16 rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	u16 sq_stat_mask;
+};
+
+struct pfc {
+	u8 msg;
+	u8 get;	/* Get or set PFC settings */
+	u8 autoneg;
+	u8 fc_rx;
+	u8 fc_tx;
+};
+
+/* 128 bit shared memory between PF and each VF */
+union nic_mbx {
+	struct { u8 msg; } msg;
+	struct nic_cfg_msg nic_cfg;
+	struct qs_cfg_msg qs;
+	struct rq_cfg_msg rq;
+	struct sq_cfg_msg sq;
+	struct set_mac_msg mac;
+	struct set_frs_msg frs;
+	struct cpi_cfg_msg cpi_cfg;
+	struct rss_sz_msg rss_size;
+	struct rss_cfg_msg rss_cfg;
+	struct bgx_stats_msg bgx_stats;
+	struct bgx_link_status link_status;
+#ifdef VNIC_MULTI_QSET_SUPPORT
+	struct sqs_alloc sqs_alloc;
+	struct nicvf_ptr nicvf;
+#endif
+	struct set_loopback lbk;
+};
+
+int nicvf_set_real_num_queues(struct eth_device *netdev,
+			      int tx_queues, int rx_queues);
+int nicvf_open(struct eth_device *netdev, bd_t *bis);
+void nicvf_stop(struct eth_device *netdev);
+int nicvf_send_msg_to_pf(struct nicvf *vf, union nic_mbx *mbx);
+void nicvf_free_pkt(struct nicvf *nic, void *pkt);
+void nicvf_update_stats(struct nicvf *nic);
+
+void nic_handle_mbx_intr(struct nicpf *nic, int vf);
+
+int bgx_poll_for_link(int node, int bgx_idx, int lmacid);
+const u8 *bgx_get_lmac_mac(int node, int bgx_idx, int lmacid);
+void bgx_set_lmac_mac(int node, int bgx_idx, int lmacid, const u8 *mac);
+void bgx_lmac_rx_tx_enable(int node, int bgx_idx, int lmacid, bool enable);
+void bgx_lmac_internal_loopback(int node, int bgx_idx,
+				int lmac_idx, bool enable);
+
+static inline bool pass1_silicon(unsigned int revision, int model_id)
+{
+	return ((revision < 8) && (model_id == 0x88));
+}
+
+static inline bool pass2_silicon(unsigned int revision, int model_id)
+{
+	return ((revision >= 8) && (model_id == 0x88));
+}
+
+#endif /* NIC_H */
diff --git a/drivers/net/cavium/nic_main.c b/drivers/net/cavium/nic_main.c
new file mode 100644
index 0000000000..14639b5e11
--- /dev/null
+++ b/drivers/net/cavium/nic_main.c
@@ -0,0 +1,795 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2018 Cavium Inc
+ */
+#include <config.h>
+#include <common.h>
+#include <net.h>
+#include <netdev.h>
+#include <malloc.h>
+#include <miiphy.h>
+#include <dm.h>
+#include <misc.h>
+#include <pci.h>
+#include <asm/io.h>
+
+#include <asm/arch-thunderx/thunderx_vnic.h>
+
+#include "nic_reg.h"
+#include "nic.h"
+#include "q_struct.h"
+#include "thunder_bgx.h"
+
+unsigned long rounddown_pow_of_two(unsigned long n)
+{
+	/* Fill all bits below the MSB, then keep only the MSB */
+	n |= n >> 1;
+	n |= n >> 2;
+	n |= n >> 4;
+	n |= n >> 8;
+	n |= n >> 16;
+	n |= n >> 32;
+
+	return (n + 1) >> 1;
+}
+
+static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg);
+static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
+			       struct sq_cfg_msg *sq);
+static int nic_update_hw_frs(struct nicpf *nic, int new_frs, int vf);
+static int nic_rcv_queue_sw_sync(struct nicpf *nic);
+
+/* Register read/write APIs */
+static void nic_reg_write(struct nicpf *nic, u64 offset, u64 val)
+{
+	writeq(val, nic->reg_base + offset);
+}
+
+static u64 nic_reg_read(struct nicpf *nic, u64 offset)
+{
+	return readq(nic->reg_base + offset);
+}
+
+static u64 nic_get_mbx_addr(int vf)
+{
+	return NIC_PF_VF_0_127_MAILBOX_0_1 + (vf << NIC_VF_NUM_SHIFT);
+}
+
+/* Send a mailbox message to VF
+ * @vf: vf to which this message to be sent
+ * @mbx: Message to be sent
+ */
+static void nic_send_msg_to_vf(struct nicpf *nic, int vf, union nic_mbx *mbx)
+{
+	void __iomem *mbx_addr = nic->reg_base + nic_get_mbx_addr(vf);
+	u64 *msg = (u64 *)mbx;
+
+	/* In first revision HW, the mbox interrupt is triggered
+	 * when PF writes to MBOX(1), in later revisions when
+	 * PF writes to MBOX(0)
+	 */
+	if (pass1_silicon(nic->rev_id, nic->hw->model_id)) {
+		writeq(msg[0], mbx_addr);
+		writeq(msg[1], mbx_addr + 8);
+	} else {
+		writeq(msg[1], mbx_addr + 8);
+		writeq(msg[0], mbx_addr);
+	}
+}
+
+/* Responds to VF's READY message with VF's
+ * ID, node, MAC address etc.
+ * @vf: VF which sent READY message
+ */
+static void nic_mbx_send_ready(struct nicpf *nic, int vf)
+{
+	union nic_mbx mbx = {};
+	int bgx_idx, lmac, timeout = 5, link = -1;
+	const u8 *mac;
+
+	mbx.nic_cfg.msg = NIC_MBOX_MSG_READY;
+	mbx.nic_cfg.vf_id = vf;
+
+	if (nic->flags & NIC_TNS_ENABLED)
+		mbx.nic_cfg.tns_mode = NIC_TNS_MODE;
+	else
+		mbx.nic_cfg.tns_mode = NIC_TNS_BYPASS_MODE;
+
+	if (vf < nic->num_vf_en) {
+		bgx_idx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+		mac = bgx_get_lmac_mac(nic->node, bgx_idx, lmac);
+		if (mac)
+			memcpy((u8 *)&mbx.nic_cfg.mac_addr, mac, 6);
+
+		while (timeout-- && (link <= 0)) {
+			link = bgx_poll_for_link(nic->node, bgx_idx, lmac);
+			debug("Link status: %d\n", link);
+			if (link <= 0)
+				mdelay(2000);
+		}
+	}
+	mbx.nic_cfg.sqs_mode = (vf >= nic->num_vf_en) ? true : false;
+	mbx.nic_cfg.node_id = nic->node;
+
+	mbx.nic_cfg.loopback_supported = vf < nic->num_vf_en;
+
+	nic_send_msg_to_vf(nic, vf, &mbx);
+}
+
+/* ACKs VF's mailbox message
+ * @vf: VF to which ACK to be sent
+ */
+static void nic_mbx_send_ack(struct nicpf *nic, int vf)
+{
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_ACK;
+	nic_send_msg_to_vf(nic, vf, &mbx);
+}
+
+/* NACKs VF's mailbox message to indicate that the PF is not
+ * able to complete the action
+ * @vf: VF to which NACK to be sent
+ */
+static void nic_mbx_send_nack(struct nicpf *nic, int vf)
+{
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_NACK;
+	nic_send_msg_to_vf(nic, vf, &mbx);
+}
+
+static int nic_config_loopback(struct nicpf *nic, struct set_loopback *lbk)
+{
+	int bgx_idx, lmac_idx;
+
+	if (lbk->vf_id >= nic->num_vf_en)
+		return -1;
+
+	bgx_idx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lbk->vf_id]);
+	lmac_idx = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lbk->vf_id]);
+
+	bgx_lmac_internal_loopback(nic->node, bgx_idx, lmac_idx, lbk->enable);
+
+	return 0;
+}
+
+/* Interrupt handler to handle mailbox messages from VFs */
+void nic_handle_mbx_intr(struct nicpf *nic, int vf)
+{
+	union nic_mbx mbx = {};
+	u64 *mbx_data;
+	u64 mbx_addr;
+	u64 reg_addr;
+	u64 cfg;
+	int bgx, lmac;
+	int i;
+	int ret = 0;
+
+	nic->mbx_lock[vf] = true;
+
+	mbx_addr = nic_get_mbx_addr(vf);
+	mbx_data = (u64 *)&mbx;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nic_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(u64);
+	}
+
+	debug("%s: Mailbox msg %d from VF%d\n",
+	      __func__, mbx.msg.msg, vf);
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic_mbx_send_ready(nic, vf);
+		if (vf < nic->num_vf_en) {
+			nic->link[vf] = 0;
+			nic->duplex[vf] = 0;
+			nic->speed[vf] = 0;
+		}
+		ret = 1;
+		break;
+	case NIC_MBOX_MSG_QS_CFG:
+		reg_addr = NIC_PF_QSET_0_127_CFG |
+			   (mbx.qs.num << NIC_QS_ID_SHIFT);
+		cfg = mbx.qs.cfg;
+#ifdef VNIC_MULTI_QSET_SUPPORT
+		/* Check if it's a secondary Qset */
+		if (vf >= nic->num_vf_en) {
+			cfg = cfg & (~0x7FULL);
+			/* Assign this Qset to primary Qset's VF */
+			cfg |= nic->pqs_vf[vf];
+		}
+#endif
+		nic_reg_write(nic, reg_addr, cfg);
+		break;
+	case NIC_MBOX_MSG_RQ_CFG:
+		reg_addr = NIC_PF_QSET_0_127_RQ_0_7_CFG |
+			   (mbx.rq.qs_num << NIC_QS_ID_SHIFT) |
+			   (mbx.rq.rq_num << NIC_Q_NUM_SHIFT);
+		nic_reg_write(nic, reg_addr, mbx.rq.cfg);
+		/* Enable CQE_RX2_S extension in CQE_RX descriptor.
+		 * This gets appended by default on 81xx/83xx chips,
+		 * for consistency enabling the same on 88xx pass2
+		 * where this is introduced.
+		 */
+		if (pass2_silicon(nic->rev_id, nic->hw->model_id))
+			nic_reg_write(nic, NIC_PF_RX_CFG, 0x01);
+		break;
+	case NIC_MBOX_MSG_RQ_BP_CFG:
+		reg_addr = NIC_PF_QSET_0_127_RQ_0_7_BP_CFG |
+			   (mbx.rq.qs_num << NIC_QS_ID_SHIFT) |
+			   (mbx.rq.rq_num << NIC_Q_NUM_SHIFT);
+		nic_reg_write(nic, reg_addr, mbx.rq.cfg);
+		break;
+	case NIC_MBOX_MSG_RQ_SW_SYNC:
+		ret = nic_rcv_queue_sw_sync(nic);
+		break;
+	case NIC_MBOX_MSG_RQ_DROP_CFG:
+		reg_addr = NIC_PF_QSET_0_127_RQ_0_7_DROP_CFG |
+			   (mbx.rq.qs_num << NIC_QS_ID_SHIFT) |
+			   (mbx.rq.rq_num << NIC_Q_NUM_SHIFT);
+		nic_reg_write(nic, reg_addr, mbx.rq.cfg);
+		break;
+	case NIC_MBOX_MSG_SQ_CFG:
+		reg_addr = NIC_PF_QSET_0_127_SQ_0_7_CFG |
+			   (mbx.sq.qs_num << NIC_QS_ID_SHIFT) |
+			   (mbx.sq.sq_num << NIC_Q_NUM_SHIFT);
+		nic_reg_write(nic, reg_addr, mbx.sq.cfg);
+		nic_tx_channel_cfg(nic, mbx.qs.num,
+				   (struct sq_cfg_msg *)&mbx.sq);
+		break;
+	case NIC_MBOX_MSG_SET_MAC:
+#ifdef VNIC_MULTI_QSET_SUPPORT
+		if (vf >= nic->num_vf_en)
+			break;
+#endif
+		lmac = mbx.mac.vf_id;
+		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lmac]);
+		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lmac]);
+		bgx_set_lmac_mac(nic->node, bgx, lmac, mbx.mac.mac_addr);
+		break;
+	case NIC_MBOX_MSG_SET_MAX_FRS:
+		ret = nic_update_hw_frs(nic, mbx.frs.max_frs,
+					mbx.frs.vf_id);
+		break;
+	case NIC_MBOX_MSG_CPI_CFG:
+		nic_config_cpi(nic, &mbx.cpi_cfg);
+		break;
+	case NIC_MBOX_MSG_CFG_DONE:
+		/* Last message of VF config msg sequence */
+		nic->vf_enabled[vf] = true;
+		if (vf >= nic->lmac_cnt)
+			goto unlock;
+
+		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, true);
+		goto unlock;
+	case NIC_MBOX_MSG_SHUTDOWN:
+		/* First msg in VF teardown sequence */
+		nic->vf_enabled[vf] = false;
+#ifdef VNIC_MULTI_QSET_SUPPORT
+		if (vf >= nic->num_vf_en)
+			nic->sqs_used[vf - nic->num_vf_en] = false;
+		nic->pqs_vf[vf] = 0;
+#endif
+		if (vf >= nic->lmac_cnt)
+			break;
+
+		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, false);
+		break;
+#ifdef VNIC_MULTI_QSET_SUPPORT
+	case NIC_MBOX_MSG_ALLOC_SQS:
+		nic_alloc_sqs(nic, &mbx.sqs_alloc);
+		goto unlock;
+	case NIC_MBOX_MSG_NICVF_PTR:
+		nic->nicvf[vf] = mbx.nicvf.nicvf;
+		break;
+	case NIC_MBOX_MSG_PNICVF_PTR:
+		nic_send_pnicvf(nic, vf);
+		goto unlock;
+	case NIC_MBOX_MSG_SNICVF_PTR:
+		nic_send_snicvf(nic, &mbx.nicvf);
+		goto unlock;
+#endif
+	case NIC_MBOX_MSG_LOOPBACK:
+		ret = nic_config_loopback(nic, &mbx.lbk);
+		break;
+	default:
+		printf("Invalid msg from VF%d, msg 0x%x\n", vf, mbx.msg.msg);
+		break;
+	}
+
+	if (!ret)
+		nic_mbx_send_ack(nic, vf);
+	else if (mbx.msg.msg != NIC_MBOX_MSG_READY)
+		nic_mbx_send_nack(nic, vf);
+unlock:
+	nic->mbx_lock[vf] = false;
+}
+
+static int nic_rcv_queue_sw_sync(struct nicpf *nic)
+{
+	int timeout = 20;
+
+	nic_reg_write(nic, NIC_PF_SW_SYNC_RX, 0x01);
+	/* Wait till sync cycle is finished */
+	while (timeout) {
+		if (nic_reg_read(nic, NIC_PF_SW_SYNC_RX_DONE) & 0x1)
+			break;
+		udelay(2000);
+		timeout--;
+	}
+	nic_reg_write(nic, NIC_PF_SW_SYNC_RX, 0x00);
+	if (!timeout) {
+		printf("Receive queue software sync failed\n");
+		return 1;
+	}
+	return 0;
+}
+
+/* Update hardware min/max frame size */
+static int nic_update_hw_frs(struct nicpf *nic, int new_frs, int vf)
+{
+	u64 *pkind = (u64 *)&nic->pkind;
+
+	if ((new_frs > NIC_HW_MAX_FRS) || (new_frs < NIC_HW_MIN_FRS)) {
+		printf("Invalid MTU setting from VF%d rejected, should be between %d and %d\n",
+		       vf, NIC_HW_MIN_FRS, NIC_HW_MAX_FRS);
+		return 1;
+	}
+	new_frs += ETH_HLEN;
+	if (new_frs <= nic->pkind.maxlen)
+		return 0;
+
+	nic->pkind.maxlen = new_frs;
+
+	nic_reg_write(nic, NIC_PF_PKIND_0_15_CFG, *pkind);
+	return 0;
+}
+
+/* Set minimum transmit packet size */
+static void nic_set_tx_pkt_pad(struct nicpf *nic, int size)
+{
+	int lmac;
+	u64 lmac_cfg;
+	struct hw_info *hw = nic->hw;
+	int max_lmac = nic->hw->bgx_cnt * MAX_LMAC_PER_BGX;
+
+	/* There is an issue in HW wherein, while sending GSO sized
+	 * pkts as part of TSO, if pkt len falls below this size
+	 * NIC will zero-pad the packet and also update the IP total
+	 * length. Hence set this value to less than the min pkt size
+	 * of MAC+IP+TCP headers; BGX will do the padding to transmit
+	 * a 64 byte pkt.
+	 */
+	if (size > 52)
+		size = 52;
+
+	/* CN81XX has RGX configured as FAKE BGX, adjust max_lmac accordingly */
+	if (hw->chans_per_rgx)
+		max_lmac = ((nic->hw->bgx_cnt - 1) * MAX_LMAC_PER_BGX) + 1;
+
+	for (lmac = 0; lmac < max_lmac; lmac++) {
+		lmac_cfg = nic_reg_read(nic, NIC_PF_LMAC_0_7_CFG | (lmac << 3));
+		lmac_cfg &= ~(0xF << 2);
+		lmac_cfg |= ((size / 4) << 2);
+		nic_reg_write(nic, NIC_PF_LMAC_0_7_CFG | (lmac << 3), lmac_cfg);
+	}
+}
+
+/* Function to check number of LMACs present and set VF::LMAC mapping.
+ * Mapping will be used while initializing channels.
+ */
+static void nic_set_lmac_vf_mapping(struct nicpf *nic)
+{
+	int bgx, bgx_count, next_bgx_lmac = 0;
+	int lmac, lmac_cnt = 0;
+	u64 lmac_credit;
+
+	nic->num_vf_en = 0;
+	if (nic->flags & NIC_TNS_ENABLED) {
+		nic->num_vf_en = DEFAULT_NUM_VF_ENABLED;
+		return;
+	}
+
+	bgx_get_count(nic->node, &bgx_count);
+	debug("bgx_count: %d\n", bgx_count);
+
+	for (bgx = 0; bgx < nic->hw->bgx_cnt; bgx++) {
+		if (!(bgx_count & (1 << bgx)))
+			continue;
+		nic->bgx_cnt++;
+		lmac_cnt = bgx_get_lmac_count(nic->node, bgx);
+		debug("lmac_cnt: %d for BGX%d\n", lmac_cnt, bgx);
+		for (lmac = 0; lmac < lmac_cnt; lmac++)
+			nic->vf_lmac_map[next_bgx_lmac++] =
+				NIC_SET_VF_LMAC_MAP(bgx, lmac);
+		nic->num_vf_en += lmac_cnt;
+
+		/* Program LMAC credits */
+		lmac_credit = (1ull << 1); /* channel credit enable */
+		lmac_credit |= (0x1ff << 2); /* Max outstanding pkt count */
+		/* 48KB BGX Tx buffer size, each unit is of size 16bytes */
+		lmac_credit |= (((((48 * 1024) / lmac_cnt) -
+				  NIC_HW_MAX_FRS) / 16) << 12);
+		lmac = bgx * MAX_LMAC_PER_BGX;
+		for (; lmac < lmac_cnt + (bgx * MAX_LMAC_PER_BGX); lmac++)
+			nic_reg_write(nic,
+				      NIC_PF_LMAC_0_7_CREDIT + (lmac * 8),
+				      lmac_credit);
+	}
+}
+
+static void nic_get_hw_info(struct nicpf *nic)
+{
+	u16 sdevid;
+	struct hw_info *hw = nic->hw;
+
+	dm_pci_read_config16(nic->udev, PCI_SUBSYSTEM_ID, &sdevid);
+
+	switch (sdevid) {
+	case PCI_SUBSYS_DEVID_88XX_NIC_PF:
+		hw->bgx_cnt = CONFIG_MAX_BGX_PER_NODE;
+		hw->chans_per_lmac = 16;
+		hw->chans_per_bgx = 128;
+		hw->cpi_cnt = 2048;
+		hw->rssi_cnt = 4096;
+		hw->rss_ind_tbl_size = NIC_MAX_RSS_IDR_TBL_SIZE;
+		hw->tl3_cnt = 256;
+		hw->tl2_cnt = 64;
+		hw->tl1_cnt = 2;
+		hw->tl1_per_bgx = true;
+		hw->model_id = 0x88;
+		break;
+	case PCI_SUBSYS_DEVID_81XX_NIC_PF:
+		hw->bgx_cnt = CONFIG_MAX_BGX_PER_NODE;
+		hw->chans_per_lmac = 8;
+		hw->chans_per_bgx = 32;
+		hw->chans_per_rgx = 8;
+		hw->chans_per_lbk = 24;
+		hw->cpi_cnt = 512;
+		hw->rssi_cnt = 256;
+		hw->rss_ind_tbl_size = 32; /* Max RSSI / Max interfaces */
+		hw->tl3_cnt = 64;
+		hw->tl2_cnt = 16;
+		hw->tl1_cnt = 10;
+		hw->tl1_per_bgx = false;
+		hw->model_id = 0x81;
+		break;
+	case PCI_SUBSYS_DEVID_83XX_NIC_PF:
+		hw->bgx_cnt = CONFIG_MAX_BGX_PER_NODE;
+		hw->chans_per_lmac = 8;
+		hw->chans_per_bgx = 32;
+		hw->chans_per_lbk = 64;
+		hw->cpi_cnt = 2048;
+		hw->rssi_cnt = 1024;
+		hw->rss_ind_tbl_size = 64; /* Max RSSI / Max interfaces */
+		hw->tl3_cnt = 256;
+		hw->tl2_cnt = 64;
+		hw->tl1_cnt = 18;
+		hw->tl1_per_bgx = false;
+		hw->model_id = 0x83;
+		break;
+	}
+
+	hw->tl4_cnt = MAX_QUEUES_PER_QSET *
+		      dm_pci_sriov_get_totalvfs(nic->udev);
+}
+
+static void nic_init_hw(struct nicpf *nic)
+{
+	int i;
+	u64 reg;
+	u64 *pkind = (u64 *)&nic->pkind;
+
+	/* Get HW capability info */
+	nic_get_hw_info(nic);
+
+	/* Enable NIC HW block */
+	nic_reg_write(nic, NIC_PF_CFG, 0x3);
+
+	/* Enable backpressure */
+	nic_reg_write(nic, NIC_PF_BP_CFG, (1ULL << 6) | 0x03);
+	nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG, (1ULL << 63) | 0x08);
+	nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG + (1 << 8),
+		      (1ULL << 63) | 0x09);
+
+	for (i = 0; i < NIC_MAX_CHANS; i++)
+		nic_reg_write(nic, NIC_PF_CHAN_0_255_TX_CFG | (i << 3), 1);
+
+	if (nic->flags & NIC_TNS_ENABLED) {
+		reg = NIC_TNS_MODE << 7;
+		reg |= 0x06;
+		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG, reg);
+		reg &= ~0xFull;
+		reg |= 0x07;
+		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG | (1 << 8), reg);
+	} else {
+		/* Disable TNS mode on both interfaces */
+		reg = NIC_TNS_BYPASS_MODE << 7;
+		reg |= 0x08; /* Block identifier */
+		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG, reg);
+		reg &= ~0xFull;
+		reg |= 0x09;
+		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG | (1 << 8), reg);
+	}
+
+	/* PKIND configuration */
+	nic->pkind.minlen = 0;
+	nic->pkind.maxlen = NIC_HW_MAX_FRS + ETH_HLEN;
+	nic->pkind.lenerr_en = 1;
+	nic->pkind.rx_hdr = 0;
+	nic->pkind.hdr_sl = 0;
+
+	for (i = 0; i < NIC_MAX_PKIND; i++)
+		nic_reg_write(nic, NIC_PF_PKIND_0_15_CFG | (i << 3), *pkind);
+
+	nic_set_tx_pkt_pad(nic, NIC_HW_MIN_FRS);
+
+	/* Timer config */
+	nic_reg_write(nic, NIC_PF_INTR_TIMER_CFG, NICPF_CLK_PER_INT_TICK);
+}
+
+/* Channel parse index configuration */
+static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg)
+{
+	struct hw_info *hw = nic->hw;
+	u32 vnic, bgx, lmac, chan;
+	u32 padd, cpi_count = 0;
+	u64 cpi_base, cpi, rssi_base, rssi;
+	u8 qset, rq_idx = 0;
+
+	vnic = cfg->vf_id;
+	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vnic]);
+	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vnic]);
+
+	chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
+	cpi_base = vnic * NIC_MAX_CPI_PER_LMAC;
+	rssi_base = vnic * hw->rss_ind_tbl_size;
+
+	/* Rx channel configuration */
+	nic_reg_write(nic, NIC_PF_CHAN_0_255_RX_BP_CFG | (chan << 3),
+		      (1ull << 63) | (vnic << 0));
+	nic_reg_write(nic, NIC_PF_CHAN_0_255_RX_CFG | (chan << 3),
+		      ((u64)cfg->cpi_alg << 62) | (cpi_base << 48));
+
+	if (cfg->cpi_alg == CPI_ALG_NONE)
+		cpi_count = 1;
+	else if (cfg->cpi_alg == CPI_ALG_VLAN) /* 3 bits of PCP */
+		cpi_count = 8;
+	else if (cfg->cpi_alg == CPI_ALG_VLAN16) /* 3 bits PCP + DEI */
+		cpi_count = 16;
+	else if (cfg->cpi_alg == CPI_ALG_DIFF) /* 6 bits DSCP */
+		cpi_count = NIC_MAX_CPI_PER_LMAC;
+
+	/* RSS Qset, Qidx mapping */
+	qset = cfg->vf_id;
+	rssi = rssi_base;
+	for (; rssi < (rssi_base + cfg->rq_cnt); rssi++) {
+		nic_reg_write(nic, NIC_PF_RSSI_0_4097_RQ | (rssi << 3),
+			      (qset << 3) | rq_idx);
+		rq_idx++;
+	}
+
+	rssi = 0;
+	cpi = cpi_base;
+	for (; cpi < (cpi_base + cpi_count); cpi++) {
+		/* Determine port to channel adder */
+		if (cfg->cpi_alg != CPI_ALG_DIFF)
+			padd = cpi % cpi_count;
+		else
+			padd = cpi % 8; /* 3 bits CS out of 6 bits DSCP */
+
+		/* Leave RSS_SIZE as '0' to disable RSS */
+		if (pass1_silicon(nic->rev_id, nic->hw->model_id)) {
+			nic_reg_write(nic, NIC_PF_CPI_0_2047_CFG | (cpi << 3),
+				      (vnic << 24) | (padd << 16) |
+				      (rssi_base + rssi));
+		} else {
+			/* Set MPI_ALG to '0' to disable MCAM parsing */
+			nic_reg_write(nic, NIC_PF_CPI_0_2047_CFG | (cpi << 3),
+				      (padd << 16));
+			/* MPI index is same as CPI if MPI_ALG is not enabled */
+			nic_reg_write(nic, NIC_PF_MPI_0_2047_CFG | (cpi << 3),
+				      (vnic << 24) | (rssi_base + rssi));
+		}
+
+		if ((rssi + 1) >= cfg->rq_cnt)
+			continue;
+
+		if (cfg->cpi_alg == CPI_ALG_VLAN)
+			rssi++;
+		else if (cfg->cpi_alg == CPI_ALG_VLAN16)
+			rssi = ((cpi - cpi_base) & 0xe) >> 1;
+		else if (cfg->cpi_alg == CPI_ALG_DIFF)
+			rssi = ((cpi - cpi_base) & 0x38) >> 3;
+	}
+	nic->cpi_base[cfg->vf_id] = cpi_base;
+	nic->rssi_base[cfg->vf_id] = rssi_base;
+}
+
+/* 4 level transmit side scheduler configuration
+ * for TNS bypass mode
+ *
+ * Sample configuration for SQ0 on 88xx
+ * VNIC0-SQ0 -> TL4(0) -> TL3[0] -> TL2[0] -> TL1[0] -> BGX0
+ * VNIC1-SQ0 -> TL4(8) -> TL3[2] -> TL2[0] -> TL1[0] -> BGX0
+ * VNIC2-SQ0 -> TL4(16) -> TL3[4] -> TL2[1] -> TL1[0] -> BGX0
+ * VNIC3-SQ0 -> TL4(24) -> TL3[6] -> TL2[1] -> TL1[0] -> BGX0
+ * VNIC4-SQ0 -> TL4(512) -> TL3[128] -> TL2[32] -> TL1[1] -> BGX1
+ * VNIC5-SQ0 -> TL4(520) -> TL3[130] -> TL2[32] -> TL1[1] -> BGX1
+ * VNIC6-SQ0 -> TL4(528) -> TL3[132] -> TL2[33] -> TL1[1] -> BGX1
+ * VNIC7-SQ0 -> TL4(536) -> TL3[134] -> TL2[33] -> TL1[1] -> BGX1
+ */
+static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
+			       struct sq_cfg_msg *sq)
+{
+	struct hw_info *hw = nic->hw;
+	u32 bgx, lmac, chan;
+	u32 tl2, tl3, tl4;
+	u32 rr_quantum;
+	u8 sq_idx = sq->sq_num;
+	u8 pqs_vnic = vnic;
+	int svf;
+	u16 sdevid;
+
+	dm_pci_read_config16(nic->udev, PCI_SUBSYSTEM_ID, &sdevid);
+
+	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[pqs_vnic]);
+	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[pqs_vnic]);
+
+	/* 24 bytes for FCS, IPG and preamble */
+	rr_quantum = ((NIC_HW_MAX_FRS + 24) / 4);
+
+	/* For 88xx 0-511 TL4 transmits via BGX0 and
+	 * 512-1023 TL4s transmit via BGX1.
+	 */
+	if (hw->tl1_per_bgx) {
+		tl4 = bgx * (hw->tl4_cnt / hw->bgx_cnt);
+		if (!sq->sqs_mode) {
+			tl4 += (lmac * MAX_QUEUES_PER_QSET);
+		} else {
+			for (svf = 0; svf < MAX_SQS_PER_VF_SINGLE_NODE; svf++) {
+				if (nic->vf_sqs[pqs_vnic][svf] == vnic)
+					break;
+			}
+			tl4 += (MAX_LMAC_PER_BGX * MAX_QUEUES_PER_QSET);
+			tl4 += (lmac * MAX_QUEUES_PER_QSET *
+				MAX_SQS_PER_VF_SINGLE_NODE);
+			tl4 += (svf * MAX_QUEUES_PER_QSET);
+		}
+	} else {
+		tl4 = (vnic * MAX_QUEUES_PER_QSET);
+	}
+	tl4 += sq_idx;
+
+	tl3 = tl4 / (hw->tl4_cnt / hw->tl3_cnt);
+	nic_reg_write(nic, NIC_PF_QSET_0_127_SQ_0_7_CFG2 |
+		      ((u64)vnic << NIC_QS_ID_SHIFT) |
+		      ((u32)sq_idx << NIC_Q_NUM_SHIFT), tl4);
+	nic_reg_write(nic, NIC_PF_TL4_0_1023_CFG | (tl4 << 3),
+		      ((u64)vnic << 27) | ((u32)sq_idx << 24) | rr_quantum);
+
+	nic_reg_write(nic, NIC_PF_TL3_0_255_CFG | (tl3 << 3), rr_quantum);
+
+	/* On 88xx channels 0-127 are for BGX0 and
+	 * channels 128-255 for BGX1.
+	 *
+	 * On 81xx/83xx TL3_CHAN reg should be configured with the channel
+	 * within the LMAC, i.e. 0-7, and not the actual channel number
+	 * like on 88xx
+	 */
+	chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
+	if (hw->tl1_per_bgx)
+		nic_reg_write(nic, NIC_PF_TL3_0_255_CHAN | (tl3 << 3), chan);
+	else
+		nic_reg_write(nic, NIC_PF_TL3_0_255_CHAN | (tl3 << 3), 0);
+
+	/* Enable backpressure on the channel */
+	nic_reg_write(nic, NIC_PF_CHAN_0_255_TX_CFG | (chan << 3), 1);
+
+	tl2 = tl3 >> 2;
+	nic_reg_write(nic, NIC_PF_TL3A_0_63_CFG | (tl2 << 3), tl2);
+	nic_reg_write(nic, NIC_PF_TL2_0_63_CFG | (tl2 << 3), rr_quantum);
+	/* No priorities as of now */
+	nic_reg_write(nic, NIC_PF_TL2_0_63_PRI | (tl2 << 3), 0x00);
+
+	/* Unlike 88xx where TL2s 0-31 transmit to TL1 '0' and the rest to
+	 * TL1 '1', on 81xx/83xx TL2 needs to be configured to transmit to
+	 * one of the possible LMACs.
+	 *
+	 * This register doesn't exist on 88xx.
+	 */
+	if (!hw->tl1_per_bgx)
+		nic_reg_write(nic, NIC_PF_TL2_LMAC | (tl2 << 3),
+			      lmac + (bgx * MAX_LMAC_PER_BGX));
+}
+
+static int nic_initialize(struct udevice *dev)
+{
+	struct nicpf *nic = dev_get_priv(dev);
+	size_t size;
+
+	nic->udev = dev;
+	nic->hw = calloc(1, sizeof(struct hw_info));
+	if (!nic->hw)
+		return -ENOMEM;
+
+	/* Map PF's configuration registers */
+	nic->reg_base = dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM);
+	if (!nic->reg_base) {
+		printf("Cannot map config register space, aborting\n");
+		goto exit;
+	}
+
+	nic->node = node_id(nic->reg_base);
+	dm_pci_read_config8(dev, PCI_REVISION_ID, &nic->rev_id);
+
+	/* By default set NIC in TNS bypass mode */
+	nic->flags &= ~NIC_TNS_ENABLED;
+
+	/* Initialize hardware */
+	nic_init_hw(nic);
+
+	nic_set_lmac_vf_mapping(nic);
+
+	/* Set RSS TBL size for each VF */
+	nic->rss_ind_tbl_size = NIC_MAX_RSS_IDR_TBL_SIZE;
+	nic->rss_ind_tbl_size = rounddown_pow_of_two(nic->rss_ind_tbl_size);
+
+	return 0;
+exit:
+	free(nic->hw);
+	return -ENODEV;
+}
+
+int thunderx_nic_probe(struct udevice *dev)
+{
+	int vf;
+	int ret;
+	struct nicpf *nicpf;
+
+	ret = nic_initialize(dev);
+	if (ret)
+		return ret;
+
+	nicpf = dev_get_priv(dev);
+
+	ret = dm_pci_sriov_init(dev, nicpf->num_vf_en);
+	if (ret < 0) {
+		printf("enabling SR-IOV failed for %d VFs\n",
+		       nicpf->num_vf_en);
+		return ret;
+	}
+
+	for (vf = 0; vf < nicpf->num_vf_en; vf++)
+		nicvf_initialize(dev, vf);
+
+	return 0;
+}
+
+static const struct misc_ops thunderx_nic_ops = {
+};
+
+static const struct udevice_id thunderx_nic_ids[] = {
+	{ .compatible = "cavium,nic" },
+	{}
+};
+
+U_BOOT_DRIVER(thunderx_nic) = {
+	.name	= "thunderx_nic",
+	.id	= UCLASS_MISC,
+	.probe	= thunderx_nic_probe,
+	.of_match = thunderx_nic_ids,
+	.ops	= &thunderx_nic_ops,
+	.priv_auto_alloc_size = sizeof(struct nicpf),
+};
+
+static struct pci_device_id thunderx_nic_supported[] = {
+	{ PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_NIC_PF) },
+	{}
+};
+
+U_BOOT_PCI_DEVICE(thunderx_nic, thunderx_nic_supported);
diff --git a/drivers/net/cavium/nic_reg.h b/drivers/net/cavium/nic_reg.h
new file mode 100644
index 0000000000..5a12bb90bf
--- /dev/null
+++ b/drivers/net/cavium/nic_reg.h
@@ -0,0 +1,228 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2018 Cavium Inc
+ */
+#ifndef NIC_REG_H
+#define NIC_REG_H
+
+#define NIC_PF_REG_COUNT	29573
+#define NIC_VF_REG_COUNT	249
+
+/* Physical function register offsets */
+#define NIC_PF_CFG	(0x0000)
+#define NIC_PF_STATUS	(0x0010)
+#define NIC_PF_INTR_TIMER_CFG	(0x0030)
+#define NIC_PF_BIST_STATUS	(0x0040)
+#define NIC_PF_SOFT_RESET	(0x0050)
+#define NIC_PF_TCP_TIMER	(0x0060)
+#define NIC_PF_BP_CFG	(0x0080)
+#define NIC_PF_RRM_CFG	(0x0088)
+#define NIC_PF_CQM_CFG	(0x00A0)
+#define NIC_PF_CNM_CF	(0x00A8)
+#define NIC_PF_CNM_STATUS	(0x00B0)
+#define NIC_PF_CQ_AVG_CFG	(0x00C0)
+#define NIC_PF_RRM_AVG_CFG	(0x00C8)
+#define NIC_PF_INTF_0_1_SEND_CFG	(0x0200)
+#define NIC_PF_INTF_0_1_BP_CFG	(0x0208)
+#define NIC_PF_INTF_0_1_BP_DIS_0_1	(0x0210)
+#define NIC_PF_INTF_0_1_BP_SW_0_1	(0x0220)
+#define NIC_PF_RBDR_BP_STATE_0_3	(0x0240)
+#define NIC_PF_MAILBOX_INT	(0x0410)
+#define NIC_PF_MAILBOX_INT_W1S	(0x0430)
+#define NIC_PF_MAILBOX_ENA_W1C	(0x0450)
+#define NIC_PF_MAILBOX_ENA_W1S	(0x0470)
+#define NIC_PF_RX_ETYPE_0_7	(0x0500)
+#define NIC_PF_RX_GENEVE_DEF	(0x0580)
+#define UDP_GENEVE_PORT_NUM	0x17C1ULL
+#define NIC_PF_RX_GENEVE_PROT_DEF	(0x0588)
+#define IPV6_PROT	0x86DDULL
+#define IPV4_PROT	0x800ULL
+#define ET_PROT	0x6558ULL
+#define NIC_PF_RX_NVGRE_PROT_DEF	(0x0598)
+#define NIC_PF_RX_VXLAN_DEF_0_1	(0x05A0)
+#define UDP_VXLAN_PORT_NUM	0x12B5
+#define NIC_PF_RX_VXLAN_PROT_DEF	(0x05B0)
+#define IPV6_PROT_DEF	0x2ULL
+#define IPV4_PROT_DEF	0x1ULL
+#define ET_PROT_DEF	0x3ULL
+#define NIC_PF_RX_CFG	(0x05D0)
+#define NIC_PF_PKIND_0_15_CFG	(0x0600)
+#define NIC_PF_ECC0_FLIP0	(0x1000)
+#define NIC_PF_ECC1_FLIP0	(0x1008)
+#define NIC_PF_ECC2_FLIP0	(0x1010)
+#define NIC_PF_ECC3_FLIP0	(0x1018)
+#define NIC_PF_ECC0_FLIP1	(0x1080)
+#define NIC_PF_ECC1_FLIP1	(0x1088)
+#define NIC_PF_ECC2_FLIP1	(0x1090)
+#define NIC_PF_ECC3_FLIP1	(0x1098)
+#define NIC_PF_ECC0_CDIS	(0x1100)
+#define NIC_PF_ECC1_CDIS	(0x1108)
+#define NIC_PF_ECC2_CDIS	(0x1110)
+#define NIC_PF_ECC3_CDIS	(0x1118)
+#define NIC_PF_BIST0_STATUS	(0x1280)
+#define NIC_PF_BIST1_STATUS	(0x1288)
+#define NIC_PF_BIST2_STATUS	(0x1290)
+#define NIC_PF_BIST3_STATUS	(0x1298)
+#define NIC_PF_ECC0_SBE_INT	(0x2000)
+#define NIC_PF_ECC0_SBE_INT_W1S	(0x2008)
+#define NIC_PF_ECC0_SBE_ENA_W1C	(0x2010)
+#define NIC_PF_ECC0_SBE_ENA_W1S	(0x2018)
+#define NIC_PF_ECC0_DBE_INT	(0x2100)
+#define NIC_PF_ECC0_DBE_INT_W1S	(0x2108)
+#define NIC_PF_ECC0_DBE_ENA_W1C	(0x2110)
+#define NIC_PF_ECC0_DBE_ENA_W1S	(0x2118)
+#define NIC_PF_ECC1_SBE_INT	(0x2200)
+#define NIC_PF_ECC1_SBE_INT_W1S	(0x2208)
+#define NIC_PF_ECC1_SBE_ENA_W1C	(0x2210)
+#define NIC_PF_ECC1_SBE_ENA_W1S	(0x2218)
+#define NIC_PF_ECC1_DBE_INT	(0x2300)
+#define NIC_PF_ECC1_DBE_INT_W1S	(0x2308)
+#define NIC_PF_ECC1_DBE_ENA_W1C	(0x2310)
+#define NIC_PF_ECC1_DBE_ENA_W1S	(0x2318)
+#define NIC_PF_ECC2_SBE_INT	(0x2400)
+#define NIC_PF_ECC2_SBE_INT_W1S	(0x2408)
+#define NIC_PF_ECC2_SBE_ENA_W1C	(0x2410)
+#define NIC_PF_ECC2_SBE_ENA_W1S	(0x2418)
+#define NIC_PF_ECC2_DBE_INT	(0x2500)
+#define NIC_PF_ECC2_DBE_INT_W1S	(0x2508)
+#define NIC_PF_ECC2_DBE_ENA_W1C	(0x2510)
+#define NIC_PF_ECC2_DBE_ENA_W1S	(0x2518)
+#define NIC_PF_ECC3_SBE_INT	(0x2600)
+#define NIC_PF_ECC3_SBE_INT_W1S	(0x2608)
+#define NIC_PF_ECC3_SBE_ENA_W1C	(0x2610)
+#define NIC_PF_ECC3_SBE_ENA_W1S	(0x2618)
+#define NIC_PF_ECC3_DBE_INT	(0x2700)
+#define NIC_PF_ECC3_DBE_INT_W1S	(0x2708)
+#define NIC_PF_ECC3_DBE_ENA_W1C	(0x2710)
+#define NIC_PF_ECC3_DBE_ENA_W1S	(0x2718)
+#define NIC_PF_MCAM_0_191_ENA	(0x100000)
+#define NIC_PF_MCAM_0_191_M_0_5_DATA	(0x110000)
+#define NIC_PF_MCAM_CTRL	(0x120000)
+#define NIC_PF_CPI_0_2047_CFG	(0x200000)
+#define NIC_PF_MPI_0_2047_CFG	(0x210000)
+#define NIC_PF_RSSI_0_4097_RQ	(0x220000)
+#define NIC_PF_LMAC_0_7_CFG (0x240000) +#define NIC_PF_LMAC_0_7_CFG2 (0x240100) +#define NIC_PF_LMAC_0_7_SW_XOFF (0x242000) +#define NIC_PF_LMAC_0_7_CREDIT (0x244000) +#define NIC_PF_CHAN_0_255_TX_CFG (0x400000) +#define NIC_PF_CHAN_0_255_RX_CFG (0x420000) +#define NIC_PF_CHAN_0_255_SW_XOFF (0x440000) +#define NIC_PF_CHAN_0_255_CREDIT (0x460000) +#define NIC_PF_CHAN_0_255_RX_BP_CFG (0x480000) +#define NIC_PF_SW_SYNC_RX (0x490000) +#define NIC_PF_SW_SYNC_RX_DONE (0x490008) +#define NIC_PF_TL2_0_63_CFG (0x500000) +#define NIC_PF_TL2_0_63_PRI (0x520000) +#define NIC_PF_TL2_LMAC (0x540000) +#define NIC_PF_TL2_0_63_SH_STATUS (0x580000) +#define NIC_PF_TL3A_0_63_CFG (0x5F0000) +#define NIC_PF_TL3_0_255_CFG (0x600000) +#define NIC_PF_TL3_0_255_CHAN (0x620000) +#define NIC_PF_TL3_0_255_PIR (0x640000) +#define NIC_PF_TL3_0_255_SW_XOFF (0x660000) +#define NIC_PF_TL3_0_255_CNM_RATE (0x680000) +#define NIC_PF_TL3_0_255_SH_STATUS (0x6A0000) +#define NIC_PF_TL4A_0_255_CFG (0x6F0000) +#define NIC_PF_TL4_0_1023_CFG (0x800000) +#define NIC_PF_TL4_0_1023_SW_XOFF (0x820000) +#define NIC_PF_TL4_0_1023_SH_STATUS (0x840000) +#define NIC_PF_TL4A_0_1023_CNM_RATE (0x880000) +#define NIC_PF_TL4A_0_1023_CNM_STATUS (0x8A0000) +#define NIC_PF_VF_0_127_MAILBOX_0_1 (0x20002030) +#define NIC_PF_VNIC_0_127_TX_STAT_0_4 (0x20004000) +#define NIC_PF_VNIC_0_127_RX_STAT_0_13 (0x20004100) +#define NIC_PF_QSET_0_127_LOCK_0_15 (0x20006000) +#define NIC_PF_QSET_0_127_CFG (0x20010000) +#define NIC_PF_QSET_0_127_RQ_0_7_CFG (0x20010400) +#define NIC_PF_QSET_0_127_RQ_0_7_DROP_CFG (0x20010420) +#define NIC_PF_QSET_0_127_RQ_0_7_BP_CFG (0x20010500) +#define NIC_PF_QSET_0_127_RQ_0_7_STAT_0_1 (0x20010600) +#define NIC_PF_QSET_0_127_SQ_0_7_CFG (0x20010C00) +#define NIC_PF_QSET_0_127_SQ_0_7_CFG2 (0x20010C08) +#define NIC_PF_QSET_0_127_SQ_0_7_STAT_0_1 (0x20010D00) + +#define NIC_PF_MSIX_VEC_0_18_ADDR (0x000000) +#define NIC_PF_MSIX_VEC_0_CTL (0x000008) +#define NIC_PF_MSIX_PBA_0 (0x0F0000) + +/* Virtual function 
register offsets */ +#define NIC_VNIC_CFG (0x000020) +#define NIC_VF_PF_MAILBOX_0_1 (0x000130) +#define NIC_VF_INT (0x000200) +#define NIC_VF_INT_W1S (0x000220) +#define NIC_VF_ENA_W1C (0x000240) +#define NIC_VF_ENA_W1S (0x000260) + +#define NIC_VNIC_RSS_CFG (0x0020E0) +#define NIC_VNIC_RSS_KEY_0_4 (0x002200) +#define NIC_VNIC_TX_STAT_0_4 (0x004000) +#define NIC_VNIC_RX_STAT_0_13 (0x004100) +#define NIC_QSET_RQ_GEN_CFG (0x010010) + +#define NIC_QSET_CQ_0_7_CFG (0x010400) +#define NIC_QSET_CQ_0_7_CFG2 (0x010408) +#define NIC_QSET_CQ_0_7_THRESH (0x010410) +#define NIC_QSET_CQ_0_7_BASE (0x010420) +#define NIC_QSET_CQ_0_7_HEAD (0x010428) +#define NIC_QSET_CQ_0_7_TAIL (0x010430) +#define NIC_QSET_CQ_0_7_DOOR (0x010438) +#define NIC_QSET_CQ_0_7_STATUS (0x010440) +#define NIC_QSET_CQ_0_7_STATUS2 (0x010448) +#define NIC_QSET_CQ_0_7_DEBUG (0x010450) + +#define NIC_QSET_RQ_0_7_CFG (0x010600) +#define NIC_QSET_RQ_0_7_STAT_0_1 (0x010700) + +#define NIC_QSET_SQ_0_7_CFG (0x010800) +#define NIC_QSET_SQ_0_7_THRESH (0x010810) +#define NIC_QSET_SQ_0_7_BASE (0x010820) +#define NIC_QSET_SQ_0_7_HEAD (0x010828) +#define NIC_QSET_SQ_0_7_TAIL (0x010830) +#define NIC_QSET_SQ_0_7_DOOR (0x010838) +#define NIC_QSET_SQ_0_7_STATUS (0x010840) +#define NIC_QSET_SQ_0_7_DEBUG (0x010848) +#define NIC_QSET_SQ_0_7_STAT_0_1 (0x010900) + +#define NIC_QSET_RBDR_0_1_CFG (0x010C00) +#define NIC_QSET_RBDR_0_1_THRESH (0x010C10) +#define NIC_QSET_RBDR_0_1_BASE (0x010C20) +#define NIC_QSET_RBDR_0_1_HEAD (0x010C28) +#define NIC_QSET_RBDR_0_1_TAIL (0x010C30) +#define NIC_QSET_RBDR_0_1_DOOR (0x010C38) +#define NIC_QSET_RBDR_0_1_STATUS0 (0x010C40) +#define NIC_QSET_RBDR_0_1_STATUS1 (0x010C48) +#define NIC_QSET_RBDR_0_1_PREFETCH_STATUS (0x010C50) + +#define NIC_VF_MSIX_VECTOR_0_19_ADDR (0x000000) +#define NIC_VF_MSIX_VECTOR_0_19_CTL (0x000008) +#define NIC_VF_MSIX_PBA (0x0F0000) + +/* Offsets within registers */ +#define NIC_MSIX_VEC_SHIFT 4 +#define NIC_Q_NUM_SHIFT 18 +#define NIC_QS_ID_SHIFT 21 +#define 
NIC_VF_NUM_SHIFT 21 + +/* Port kind configuration register */ +struct pkind_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_42_63:22; + u64 hdr_sl:5; /* Header skip length */ + u64 rx_hdr:3; /* TNS Receive header present */ + u64 lenerr_en:1;/* L2 length error check enable */ + u64 reserved_32_32:1; + u64 maxlen:16; /* Max frame size */ + u64 minlen:16; /* Min frame size */ +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 minlen:16; + u64 maxlen:16; + u64 reserved_32_32:1; + u64 lenerr_en:1; + u64 rx_hdr:3; + u64 hdr_sl:5; + u64 reserved_42_63:22; +#endif +}; + +#endif /* NIC_REG_H */ diff --git a/drivers/net/cavium/nicvf_main.c b/drivers/net/cavium/nicvf_main.c new file mode 100644 index 0000000000..eeb98b4f51 --- /dev/null +++ b/drivers/net/cavium/nicvf_main.c @@ -0,0 +1,553 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#include <config.h> +#include <common.h> +#include <dm.h> +#include <environment.h> +#include <pci.h> +#include <net.h> +#include <misc.h> +#include <netdev.h> +#include <malloc.h> +#include <asm/io.h> +#include <asm/arch-thunderx/thunderx_vnic.h> + +#include "nic_reg.h" +#include "nic.h" +#include "nicvf_queues.h" +#include "thunder_bgx.h" + +/* Register read/write APIs */ +void nicvf_reg_write(struct nicvf *nic, u64 offset, u64 val) +{ + writeq(val, nic->reg_base + offset); +} + +u64 nicvf_reg_read(struct nicvf *nic, u64 offset) +{ + return readq(nic->reg_base + offset); +} + +void nicvf_queue_reg_write(struct nicvf *nic, u64 offset, + u64 qidx, u64 val) +{ + void *addr = nic->reg_base + offset; + + writeq(val, (void *)(addr + (qidx << NIC_Q_NUM_SHIFT))); +} + +u64 nicvf_queue_reg_read(struct nicvf *nic, u64 offset, u64 qidx) +{ + void *addr = nic->reg_base + offset; + + return readq((void *)(addr + (qidx << NIC_Q_NUM_SHIFT))); +} + +static void nicvf_handle_mbx_intr(struct nicvf *nic); + +/* VF -> PF mailbox communication */ +static void nicvf_write_to_mbx(struct nicvf *nic, union nic_mbx *mbx) +{ + 
u64 *msg = (u64 *)mbx;
+
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1 + 0, msg[0]);
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1 + 8, msg[1]);
+}
+
+int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
+{
+	int timeout = NIC_PF_VF_MBX_TIMEOUT;
+	int sleep = 10;
+
+	nic->pf_acked = false;
+	nic->pf_nacked = false;
+
+	nicvf_write_to_mbx(nic, mbx);
+
+	nic_handle_mbx_intr(nic->nicpf, nic->vf_id);
+
+	/* Wait for the message to be acked, timeout 2sec */
+	while (!nic->pf_acked) {
+		if (nic->pf_nacked)
+			return -1;
+		mdelay(sleep);
+		nicvf_handle_mbx_intr(nic);
+
+		if (nic->pf_acked)
+			break;
+		timeout -= sleep;
+		if (!timeout) {
+			printf("PF didn't ack mbox msg %d from VF%d\n",
+			       (mbx->msg.msg & 0xFF), nic->vf_id);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/* Checks if the VF is able to communicate with the PF
+ * and also gets the VNIC number this VF is associated to.
+ */
+static int nicvf_check_pf_ready(struct nicvf *nic)
+{
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_READY;
+	if (nicvf_send_msg_to_pf(nic, &mbx)) {
+		printf("PF didn't respond to READY msg\n");
+		return 0;
+	}
+
+	return 1;
+}
+
+static void nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	union nic_mbx mbx = {};
+	u64 *mbx_data;
+	u64 mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (u64 *)&mbx;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(u64);
+	}
+
+	debug("Mbox message: msg: 0x%x\n", mbx.msg.msg);
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->pf_acked = true;
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		if (!nic->set_mac_pending)
+			memcpy(nic->netdev->enetaddr,
+			       mbx.nic_cfg.mac_addr, 6);
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		nic->link_up = false;
+		nic->duplex = 0;
+		nic->speed = 0;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	
case NIC_MBOX_MSG_NACK: + nic->pf_nacked = true; + break; + case NIC_MBOX_MSG_BGX_LINK_CHANGE: + nic->pf_acked = true; + nic->link_up = mbx.link_status.link_up; + nic->duplex = mbx.link_status.duplex; + nic->speed = mbx.link_status.speed; + if (nic->link_up) { + printf("%s: Link is Up %d Mbps %s\n", + nic->netdev->name, nic->speed, + nic->duplex == 1 ? + "Full duplex" : "Half duplex"); + } else { + printf("%s: Link is Down\n", + nic->netdev->name); + } + break; + default: + printf("Invalid message from PF, msg 0x%x\n", mbx.msg.msg); + break; + } + + nicvf_clear_intr(nic, NICVF_INTR_MBOX, 0); +} + +static int nicvf_hw_set_mac_addr(struct nicvf *nic, struct eth_device *netdev) +{ + union nic_mbx mbx = {}; + + mbx.mac.msg = NIC_MBOX_MSG_SET_MAC; + mbx.mac.vf_id = nic->vf_id; + memcpy(mbx.mac.mac_addr, netdev->enetaddr, 6); + + return nicvf_send_msg_to_pf(nic, &mbx); +} + +static void nicvf_config_cpi(struct nicvf *nic) +{ + union nic_mbx mbx = {}; + + mbx.cpi_cfg.msg = NIC_MBOX_MSG_CPI_CFG; + mbx.cpi_cfg.vf_id = nic->vf_id; + mbx.cpi_cfg.cpi_alg = nic->cpi_alg; + mbx.cpi_cfg.rq_cnt = nic->qs->rq_cnt; + + nicvf_send_msg_to_pf(nic, &mbx); +} + + +static int nicvf_init_resources(struct nicvf *nic) +{ + int err; + + nic->num_qs = 1; + + /* Enable Qset */ + nicvf_qset_config(nic, true); + + /* Initialize queues and HW for data transfer */ + err = nicvf_config_data_transfer(nic, true); + + if (err) { + printf("Failed to alloc/config VF's QSet resources\n"); + return err; + } + return 0; +} + +void nicvf_free_pkt(struct nicvf *nic, void *pkt) +{ + free(pkt); +} + +static void nicvf_snd_pkt_handler(struct nicvf *nic, + struct cmp_queue *cq, + void *cq_desc, int cqe_type) +{ + struct cqe_send_t *cqe_tx; + struct snd_queue *sq; + struct sq_hdr_subdesc *hdr; + + cqe_tx = (struct cqe_send_t *)cq_desc; + sq = &nic->qs->sq[cqe_tx->sq_idx]; + + hdr = (struct sq_hdr_subdesc *)GET_SQ_DESC(sq, cqe_tx->sqe_ptr); + if (hdr->subdesc_type != SQ_DESC_TYPE_HEADER) + return; + + 
nicvf_check_cqe_tx_errs(nic, cq, cq_desc); + nicvf_put_sq_desc(sq, hdr->subdesc_cnt + 1); +} + +static int nicvf_rcv_pkt_handler(struct nicvf *nic, + struct cmp_queue *cq, void *cq_desc, + void **ppkt, int cqe_type) +{ + void *pkt; + + size_t pkt_len; + struct cqe_rx_t *cqe_rx = (struct cqe_rx_t *)cq_desc; + int err = 0; + + /* Check for errors */ + err = nicvf_check_cqe_rx_errs(nic, cq, cq_desc); + if (err && !cqe_rx->rb_cnt) + return -1; + + pkt = nicvf_get_rcv_pkt(nic, cq_desc, &pkt_len); + if (!pkt) { + debug("Packet not received\n"); + return -1; + } + + if (pkt) + *ppkt = pkt; + + return pkt_len; +} + +static int nicvf_cq_handler(struct nicvf *nic, void **ppkt, int *pkt_len) +{ + int cq_qnum = 0; + int processed_sq_cqe = 0; + int processed_rq_cqe = 0; + int processed_cqe = 0; + + unsigned long cqe_count, cqe_head; + struct queue_set *qs = nic->qs; + struct cmp_queue *cq = &qs->cq[cq_qnum]; + struct cqe_rx_t *cq_desc; + + /* Get num of valid CQ entries expect next one to be SQ completion */ + cqe_count = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_STATUS, cq_qnum); + cqe_count &= 0xFFFF; + if (!cqe_count) + return 0; + + /* Get head of the valid CQ entries */ + cqe_head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, cq_qnum) >> 9; + cqe_head &= 0xFFFF; + + if (cqe_count) { + /* Get the CQ descriptor */ + cq_desc = (struct cqe_rx_t *)GET_CQ_DESC(cq, cqe_head); + cqe_head++; + cqe_head &= (cq->dmem.q_len - 1); + /* Initiate prefetch for next descriptor */ + prefetch((struct cqe_rx_t *)GET_CQ_DESC(cq, cqe_head)); + + switch (cq_desc->cqe_type) { + case CQE_TYPE_RX: + debug("%s: Got Rx CQE\n", nic->netdev->name); + *pkt_len = nicvf_rcv_pkt_handler(nic, cq, cq_desc, ppkt, CQE_TYPE_RX); + processed_rq_cqe++; + break; + case CQE_TYPE_SEND: + debug("%s: Got Tx CQE\n", nic->netdev->name); + nicvf_snd_pkt_handler(nic, cq, cq_desc, CQE_TYPE_SEND); + processed_sq_cqe++; + break; + default: + debug("%s: Got CQ type %u\n", nic->netdev->name, cq_desc->cqe_type); + break; + 
} + processed_cqe++; + } + + + /* Dequeue CQE */ + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_DOOR, + cq_qnum, processed_cqe); + + asm volatile ("dsb sy"); + + return (processed_sq_cqe | processed_rq_cqe) ; +} + +/* Qset error interrupt handler + * + * As of now only CQ errors are handled + */ +void nicvf_handle_qs_err(struct nicvf *nic) +{ + struct queue_set *qs = nic->qs; + int qidx; + u64 status; + + /* Check if it is CQ err */ + for (qidx = 0; qidx < qs->cq_cnt; qidx++) { + status = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_STATUS, + qidx); + if (!(status & CQ_ERR_MASK)) + continue; + /* Process already queued CQEs and reconfig CQ */ + nicvf_sq_disable(nic, qidx); + nicvf_cmp_queue_config(nic, qs, qidx, true); + nicvf_sq_free_used_descs(nic->netdev, &qs->sq[qidx], qidx); + nicvf_sq_enable(nic, &qs->sq[qidx], qidx); + } +} + +int nicvf_xmit(struct eth_device *netdev, void *pkt, int pkt_len) +{ + struct nicvf *nic = netdev->priv; + int ret = 0; + int rcv_len = 0; + unsigned int timeout = 5000; + void *rpkt = NULL; + + if (!nicvf_sq_append_pkt(nic, pkt, pkt_len)) { + printf("VF%d: TX ring full\n", nic->vf_id); + return -1; + } + + /* check and update CQ for pkt sent */ + while (!ret && timeout--) { + ret = nicvf_cq_handler(nic, &rpkt, &rcv_len); + if (!ret) + { + debug("%s: %d, Not sent\n", __FUNCTION__, __LINE__); + udelay(10); + } + } + + return 0; +} + +int nicvf_recv(struct eth_device *netdev) +{ + struct nicvf *nic = netdev->priv; + void *pkt; + int pkt_len = 0; +#ifdef DEBUG + u8 *dpkt; + int i, j; +#endif + + nicvf_cq_handler(nic, &pkt, &pkt_len); + + if (pkt_len) { +#ifdef DEBUG + dpkt = pkt; + printf("RX packet contents:\n"); + for (i = 0; i < 8; i++) { + puts("\t"); + for (j = 0; j < 10; j++) { + printf("%02x ", dpkt[i * 10 + j]); + } + puts("\n"); + } +#endif + net_process_received_packet(pkt, pkt_len); + nicvf_refill_rbdr(nic); + } + + return pkt_len; +} + +void nicvf_stop(struct eth_device *netdev) +{ + struct nicvf *nic = netdev->priv; + + if 
(!nic->open)
+		return;
+
+	/* Free resources */
+	nicvf_config_data_transfer(nic, false);
+
+	/* Disable HW Qset */
+	nicvf_qset_config(nic, false);
+
+	nic->open = false;
+}
+
+int nicvf_open(struct eth_device *netdev, bd_t *bis)
+{
+	int err;
+	struct nicvf *nic = netdev->priv;
+
+	nicvf_hw_set_mac_addr(nic, netdev);
+
+	/* Configure CPI algorithm */
+	nic->cpi_alg = CPI_ALG_NONE;
+	nicvf_config_cpi(nic);
+
+	/* Initialize the queues */
+	err = nicvf_init_resources(nic);
+	if (err)
+		return -1;
+
+	if (!nicvf_check_pf_ready(nic))
+		return -1;
+
+	nic->open = true;
+
+	/* Make sure queue initialization is written */
+	asm volatile("dsb sy");
+
+	return 0;
+}
+
+int nicvf_initialize(struct udevice *pdev, int vf_num)
+{
+	struct eth_device *netdev = NULL;
+	struct nicvf *nicvf = NULL;
+	int ret;
+	size_t size;
+
+	netdev = calloc(1, sizeof(struct eth_device));
+	if (!netdev) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	nicvf = calloc(1, sizeof(struct nicvf));
+	if (!nicvf) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	netdev->priv = nicvf;
+	nicvf->netdev = netdev;
+	nicvf->nicpf = dev_get_priv(pdev);
+	nicvf->vf_id = vf_num;
+
+	/* Enable TSO support */
+	nicvf->hw_tso = true;
+
+	nicvf->reg_base = dm_pci_map_bar(pdev, 9, &size, PCI_REGION_MEM);
+	if (!nicvf->reg_base) {
+		printf("Cannot map config register space, aborting\n");
+		ret = -1;
+		goto fail;
+	}
+
+	/* Each VF owns a 'size' byte slice of the BAR */
+	nicvf->reg_base += size * nicvf->vf_id;
+
+	debug("nicvf->reg_base: %p\n", nicvf->reg_base);
+
+	ret = nicvf_set_qset_resources(nicvf);
+	if (ret)
+		goto fail;
+
+	snprintf(netdev->name, sizeof(netdev->name), "vnic%u", nicvf->vf_id);
+
+	netdev->halt = nicvf_stop;
+	netdev->init = nicvf_open;
+	netdev->send = nicvf_xmit;
+	netdev->recv = nicvf_recv;
+
+	if (!eth_env_get_enetaddr_by_index("eth", nicvf->vf_id,
+					   netdev->enetaddr)) {
+		eth_env_get_enetaddr("ethaddr", netdev->enetaddr);
+		netdev->enetaddr[5] += nicvf->vf_id;
+	}
+
+	ret = eth_register(netdev);
+	if (!ret)
+		return 0;
+
+	printf("Failed to 
register netdevice\n"); + +fail: + if (nicvf) + free(nicvf); + if(netdev) + free(netdev); + return ret; +} + +int thunderx_vnic_probe(struct udevice *dev) +{ + void *regs; + size_t size; + + regs = dm_pci_map_bar(dev, 9, &size, PCI_REGION_MEM); + + debug("%s: %d, regs: %p\n", __FUNCTION__, __LINE__, regs); + + return 0; +} + +static const struct misc_ops thunderx_vnic_ops = { +}; + +static const struct udevice_id thunderx_vnic_ids[] = { + { .compatible = "cavium,vnic" }, + {} +}; + +U_BOOT_DRIVER(thunderx_vnic) = { + .name = "thunderx_vnic", + .id = UCLASS_MISC, + .probe = thunderx_vnic_probe, + .of_match = thunderx_vnic_ids, + .ops = &thunderx_vnic_ops, + .priv_auto_alloc_size = sizeof(struct nicvf), +}; + +static struct pci_device_id thunderx_vnic_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_NIC_VF_1) }, + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_NIC_VF) }, + {} +}; + +U_BOOT_PCI_DEVICE(thunderx_vnic, thunderx_vnic_supported); diff --git a/drivers/net/cavium/nicvf_queues.c b/drivers/net/cavium/nicvf_queues.c new file mode 100644 index 0000000000..562b528e44 --- /dev/null +++ b/drivers/net/cavium/nicvf_queues.c @@ -0,0 +1,1123 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#include <config.h> +#include <common.h> +#include <net.h> +#include <netdev.h> +#include <malloc.h> + +#include "nic_reg.h" +#include "nic.h" +#include "q_struct.h" +#include "nicvf_queues.h" + +/* Poll a register for a specific value */ +static int nicvf_poll_reg(struct nicvf *nic, int qidx, + u64 reg, int bit_pos, int bits, int val) +{ + u64 bit_mask; + u64 reg_val; + int timeout = 10; + + bit_mask = (1ULL << bits) - 1; + bit_mask = (bit_mask << bit_pos); + + while (timeout) { + reg_val = nicvf_queue_reg_read(nic, reg, qidx); + if (((reg_val & bit_mask) >> bit_pos) == val) + return 0; + udelay(2000); + timeout--; + } + printf("Poll on reg 0x%llx failed\n", reg); + return 1; +} + +/* Allocate memory for a queue's descriptors */ 
+static int nicvf_alloc_q_desc_mem(struct nicvf *nic, struct q_desc_mem *dmem, + int q_len, int desc_size, int align_bytes) +{ + dmem->q_len = q_len; + dmem->size = (desc_size * q_len) + align_bytes; + /* Save address, need it while freeing */ + dmem->unalign_base = calloc(1, dmem->size); + dmem->dma = (uintptr_t)dmem->unalign_base; + if (!dmem->unalign_base) + return -ENOMEM; + + /* Align memory address for 'align_bytes' */ + dmem->phys_base = NICVF_ALIGNED_ADDR((u64)dmem->dma, align_bytes); + dmem->base = dmem->unalign_base + (dmem->phys_base - dmem->dma); + + return 0; +} + +/* Free queue's descriptor memory */ +static void nicvf_free_q_desc_mem(struct nicvf *nic, struct q_desc_mem *dmem) +{ + if (!dmem) + return; + + free(dmem->unalign_base); + dmem->unalign_base = NULL; + dmem->base = NULL; +} + + +static void *nicvf_rb_ptr_to_pkt(struct nicvf *nic, uintptr_t rb_ptr) +{ + return (void *)rb_ptr; +} + +static int nicvf_init_rbdr(struct nicvf *nic, struct rbdr *rbdr, + int ring_len, int buf_size) +{ + int idx; + uintptr_t rbuf; + struct rbdr_entry_t *desc; + + if (nicvf_alloc_q_desc_mem(nic, &rbdr->dmem, ring_len, + sizeof(struct rbdr_entry_t), + NICVF_RCV_BUF_ALIGN_BYTES)) { + printf("Unable to allocate memory for rcv buffer ring\n"); + return -1; + } + + rbdr->desc = rbdr->dmem.base; + /* Buffer size has to be in multiples of 128 bytes */ + rbdr->dma_size = buf_size; + rbdr->enable = true; + rbdr->thresh = RBDR_THRESH; + + debug("%s: %d: allocating %lld bytes for rcv buffers\n", + __FUNCTION__, __LINE__, + ring_len * buf_size + NICVF_RCV_BUF_ALIGN_BYTES); + rbdr->buf_mem = (uintptr_t)calloc(1, ring_len * buf_size + + NICVF_RCV_BUF_ALIGN_BYTES); + + if (!rbdr->buf_mem) { + printf("Unable to allocate memory for rcv buffers\n"); + return -1; + } + + rbdr->buffers = NICVF_ALIGNED_ADDR(rbdr->buf_mem, NICVF_RCV_BUF_ALIGN_BYTES); + + debug("%s: %d: rbdr->buf_mem: %lx, rbdr->buffers: %lx\n", + __FUNCTION__, __LINE__, rbdr->buf_mem, rbdr->buffers); + + for (idx = 0; idx 
< ring_len; idx++) { + rbuf = rbdr->buffers + DMA_BUFFER_LEN * idx; + desc = GET_RBDR_DESC(rbdr, idx); + desc->buf_addr = rbuf >> NICVF_RCV_BUF_ALIGN; + flush_dcache_range((uintptr_t)desc, (uintptr_t)desc + sizeof(desc)); + } + return 0; +} + +/* Free RBDR ring and its receive buffers */ +static void nicvf_free_rbdr(struct nicvf *nic, struct rbdr *rbdr) +{ + if (!rbdr) + return; + + rbdr->enable = false; + if (!rbdr->dmem.base) + return; + + debug("%s: %d: rbdr->buf_mem: %p\n", __FUNCTION__, + __LINE__, (void *)rbdr->buf_mem); + free((void *)rbdr->buf_mem); + + /* Free RBDR ring */ + nicvf_free_q_desc_mem(nic, &rbdr->dmem); +} + +/* Refill receive buffer descriptors with new buffers. + */ +void nicvf_refill_rbdr(struct nicvf *nic) +{ + struct queue_set *qs = nic->qs; + int rbdr_idx = qs->rbdr_cnt; + unsigned long qcount, head, tail, rb_cnt; + struct rbdr *rbdr; + + if (!rbdr_idx) + return; + rbdr_idx--; + rbdr = &qs->rbdr[rbdr_idx]; + /* Check if it's enabled */ + if (!rbdr->enable) { + printf("Receive queue %d is disabled\n", rbdr_idx); + return; + } + + /* check if valid descs reached or crossed threshold level */ + qcount = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, rbdr_idx); + head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, rbdr_idx); + tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, rbdr_idx); + + qcount &= 0x7FFFF; + + rb_cnt = qs->rbdr_len - qcount - 1; + + debug("%s: %d: qcount: %lu, head: %lx, tail: %lx, rb_cnt: %lu\n", + __FUNCTION__, __LINE__, qcount, head, tail, rb_cnt); + + + /* Notify HW */ + nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, rbdr_idx, rb_cnt); + + asm volatile ("dsb sy"); +} + + +/* TBD: how to handle full packets received in CQ + * i.e conversion of buffers into SKBs + */ +static int nicvf_init_cmp_queue(struct nicvf *nic, + struct cmp_queue *cq, int q_len) +{ + if (nicvf_alloc_q_desc_mem(nic, &cq->dmem, q_len, + CMP_QUEUE_DESC_SIZE, + NICVF_CQ_BASE_ALIGN_BYTES)) { + printf("Unable to allocate memory for 
completion queue\n"); + return -1; + } + cq->desc = cq->dmem.base; + if (!pass1_silicon(nic->rev_id, nic->nicpf->hw->model_id)) + cq->thresh = CMP_QUEUE_CQE_THRESH; + else + cq->thresh = 0; + cq->intr_timer_thresh = CMP_QUEUE_TIMER_THRESH; + + return 0; +} + +static void nicvf_free_cmp_queue(struct nicvf *nic, struct cmp_queue *cq) +{ + if (!cq) + return; + if (!cq->dmem.base) + return; + + nicvf_free_q_desc_mem(nic, &cq->dmem); +} + +static int nicvf_init_snd_queue(struct nicvf *nic, + struct snd_queue *sq, int q_len) +{ + if (nicvf_alloc_q_desc_mem(nic, &sq->dmem, q_len, + SND_QUEUE_DESC_SIZE, + NICVF_SQ_BASE_ALIGN_BYTES)) { + printf("Unable to allocate memory for send queue\n"); + return -1; + } + + sq->desc = sq->dmem.base; + sq->skbuff = calloc(q_len, sizeof(u64)); + sq->head = 0; + sq->tail = 0; + sq->free_cnt = q_len - 1; + sq->thresh = SND_QUEUE_THRESH; + + return 0; +} + +static void nicvf_free_snd_queue(struct nicvf *nic, struct snd_queue *sq) +{ + if (!sq) + return; + if (!sq->dmem.base) + return; + + debug("%s: %d\n", __FUNCTION__, __LINE__); + free(sq->skbuff); + + nicvf_free_q_desc_mem(nic, &sq->dmem); +} + +static void nicvf_reclaim_snd_queue(struct nicvf *nic, + struct queue_set *qs, int qidx) +{ + /* Disable send queue */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0); + /* Check if SQ is stopped */ + if (nicvf_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS, 21, 1, 0x01)) + return; + /* Reset send queue */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET); +} + +static void nicvf_reclaim_rcv_queue(struct nicvf *nic, + struct queue_set *qs, int qidx) +{ + union nic_mbx mbx = {}; + + /* Make sure all packets in the pipeline are written back into mem */ + mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC; + nicvf_send_msg_to_pf(nic, &mbx); +} + +static void nicvf_reclaim_cmp_queue(struct nicvf *nic, + struct queue_set *qs, int qidx) +{ + /* Disable timer threshold (doesn't get reset upon CQ reset */ + nicvf_queue_reg_write(nic, 
NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+}
+
+static void nicvf_reclaim_rbdr(struct nicvf *nic,
+			       struct rbdr *rbdr, int qidx)
+{
+	u64 tmp, fifo_state;
+	int timeout = 10;
+
+	/* Save head and tail pointers for freeing up buffers */
+	rbdr->head = nicvf_queue_reg_read(nic,
+					  NIC_QSET_RBDR_0_1_HEAD,
+					  qidx) >> 3;
+	rbdr->tail = nicvf_queue_reg_read(nic,
+					  NIC_QSET_RBDR_0_1_TAIL,
+					  qidx) >> 3;
+
+	/* If RBDR FIFO is in 'FAIL' state then do a reset first
+	 * before reclaiming.
+	 */
+	fifo_state = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+	if (((fifo_state >> 62) & 0x03) == 0x3)
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG,
+				      qidx, NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return;
+	while (1) {
+		tmp = nicvf_queue_reg_read(nic,
+					   NIC_QSET_RBDR_0_1_PREFETCH_STATUS,
+					   qidx);
+		if ((tmp & 0xFFFFFFFF) == ((tmp >> 32) & 0xFFFFFFFF))
+			break;
+		mdelay(2000);
+		timeout--;
+		if (!timeout) {
+			printf("Failed polling on prefetch status\n");
+			return;
+		}
+	}
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG,
+			      qidx, NICVF_RBDR_RESET);
+
+	if (nicvf_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return;
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return;
+}
+
+/* Configures receive queue */
+static void nicvf_rcv_queue_config(struct nicvf *nic, struct queue_set *qs,
+				   int qidx, bool enable)
+{
+	union nic_mbx mbx = {};
+	struct rcv_queue *rq;
+	union {
+		struct rq_cfg s;
+		u64 u;
+	} rq_cfg;
+
+	rq = &qs->rq[qidx];
+	rq->enable = enable;
+
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 
0); + + if (!rq->enable) { + nicvf_reclaim_rcv_queue(nic, qs, qidx); + return; + } + + rq->cq_qs = qs->vnic_id; + rq->cq_idx = qidx; + rq->start_rbdr_qs = qs->vnic_id; + rq->start_qs_rbdr_idx = qs->rbdr_cnt - 1; + rq->cont_rbdr_qs = qs->vnic_id; + rq->cont_qs_rbdr_idx = qs->rbdr_cnt - 1; + /* all writes of RBDR data to be loaded into L2 Cache as well*/ + rq->caching = 1; + + /* Send a mailbox msg to PF to config RQ */ + mbx.rq.msg = NIC_MBOX_MSG_RQ_CFG; + mbx.rq.qs_num = qs->vnic_id; + mbx.rq.rq_num = qidx; + mbx.rq.cfg = (rq->caching << 26) | (rq->cq_qs << 19) | + (rq->cq_idx << 16) | (rq->cont_rbdr_qs << 9) | + (rq->cont_qs_rbdr_idx << 8) | + (rq->start_rbdr_qs << 1) | (rq->start_qs_rbdr_idx); + nicvf_send_msg_to_pf(nic, &mbx); + + mbx.rq.msg = NIC_MBOX_MSG_RQ_BP_CFG; + mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (qs->vnic_id << 0); + nicvf_send_msg_to_pf(nic, &mbx); + + /* RQ drop config + * Enable CQ drop to reserve sufficient CQEs for all tx packets + */ + mbx.rq.msg = NIC_MBOX_MSG_RQ_DROP_CFG; + mbx.rq.cfg = (1ULL << 62) | (RQ_CQ_DROP << 8); + nicvf_send_msg_to_pf(nic, &mbx); + nicvf_queue_reg_write(nic, NIC_QSET_RQ_GEN_CFG, 0, 0x00); + + /* Enable Receive queue */ + rq_cfg.s.ena = 1; + rq_cfg.s.tcp_ena = 0; + nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.u); +} + +void nicvf_cmp_queue_config(struct nicvf *nic, struct queue_set *qs, + int qidx, bool enable) +{ + struct cmp_queue *cq; + union { + u64 u; + struct cq_cfg s; + } cq_cfg; + + cq = &qs->cq[qidx]; + cq->enable = enable; + + if (!cq->enable) { + nicvf_reclaim_cmp_queue(nic, qs, qidx); + return; + } + + /* Reset completion queue */ + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET); + + if (!cq->enable) + return; + + /* Set completion queue base address */ + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, + qidx, (u64)(cq->dmem.phys_base)); + + /* Enable Completion queue */ + cq_cfg.s.ena = 1; + cq_cfg.s.reset = 0; + cq_cfg.s.caching = 0; + cq_cfg.s.qsize = 
CMP_QSIZE; + cq_cfg.s.avg_con = 0; + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.u); + + /* Set threshold value for interrupt generation */ + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, cq->thresh); + nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, cq->intr_timer_thresh); +} + +/* Configures transmit queue */ +static void nicvf_snd_queue_config(struct nicvf *nic, struct queue_set *qs, + int qidx, bool enable) +{ + union nic_mbx mbx = {}; + struct snd_queue *sq; + + union { + struct sq_cfg s; + u64 u; + } sq_cfg; + + sq = &qs->sq[qidx]; + sq->enable = enable; + + if (!sq->enable) { + nicvf_reclaim_snd_queue(nic, qs, qidx); + return; + } + + /* Reset send queue */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET); + + sq->cq_qs = qs->vnic_id; + sq->cq_idx = qidx; + + /* Send a mailbox msg to PF to config SQ */ + mbx.sq.msg = NIC_MBOX_MSG_SQ_CFG; + mbx.sq.qs_num = qs->vnic_id; + mbx.sq.sq_num = qidx; + mbx.sq.sqs_mode = nic->sqs_mode; + mbx.sq.cfg = (sq->cq_qs << 3) | sq->cq_idx; + nicvf_send_msg_to_pf(nic, &mbx); + + /* Set queue base address */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, + qidx, (u64)(sq->dmem.phys_base)); + + /* Enable send queue & set queue size */ + sq_cfg.s.ena = 1; + sq_cfg.s.reset = 0; + sq_cfg.s.ldwb = 0; + sq_cfg.s.qsize = SND_QSIZE; + sq_cfg.s.tstmp_bgx_intf = 0; + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.u); + + /* Set threshold value for interrupt generation */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_THRESH, qidx, sq->thresh); +} + +/* Configures receive buffer descriptor ring */ +static void nicvf_rbdr_config(struct nicvf *nic, struct queue_set *qs, + int qidx, bool enable) +{ + struct rbdr *rbdr; + union { + struct rbdr_cfg s; + u64 u; + } rbdr_cfg; + + rbdr = &qs->rbdr[qidx]; + nicvf_reclaim_rbdr(nic, rbdr, qidx); + if (!enable) + return; + + /* Set descriptor base address */ + nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, + qidx, 
(u64)(rbdr->dmem.phys_base)); + + /* Enable RBDR & set queue size */ + /* Buffer size should be in multiples of 128 bytes */ + rbdr_cfg.s.ena = 1; + rbdr_cfg.s.reset = 0; + rbdr_cfg.s.ldwb = 0; + rbdr_cfg.s.qsize = RBDR_SIZE; + rbdr_cfg.s.avg_con = 0; + rbdr_cfg.s.lines = rbdr->dma_size / 128; + nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, + qidx, rbdr_cfg.u); + + /* Notify HW */ + nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, + qidx, qs->rbdr_len - 1); + + /* Set threshold value for interrupt generation */ + nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_THRESH, + qidx, rbdr->thresh - 1); +} + +/* Requests PF to assign and enable Qset */ +void nicvf_qset_config(struct nicvf *nic, bool enable) +{ + union nic_mbx mbx = {}; + struct queue_set *qs = nic->qs; + struct qs_cfg *qs_cfg; + + if (!qs) { + printf("Qset is still not allocated, don't init queues\n"); + return; + } + + qs->enable = enable; + qs->vnic_id = nic->vf_id; + + /* Send a mailbox msg to PF to config Qset */ + mbx.qs.msg = NIC_MBOX_MSG_QS_CFG; + mbx.qs.num = qs->vnic_id; +#ifdef VNIC_MULTI_QSET_SUPPORT + mbx.qs.sqs_count = nic->sqs_count; +#endif + + mbx.qs.cfg = 0; + qs_cfg = (struct qs_cfg *)&mbx.qs.cfg; + if (qs->enable) { + qs_cfg->ena = 1; +#ifdef __BIG_ENDIAN + qs_cfg->be = 1; +#endif + qs_cfg->vnic = qs->vnic_id; + } + nicvf_send_msg_to_pf(nic, &mbx); +} + +static void nicvf_free_resources(struct nicvf *nic) +{ + int qidx; + struct queue_set *qs = nic->qs; + + /* Free receive buffer descriptor ring */ + for (qidx = 0; qidx < qs->rbdr_cnt; qidx++) + nicvf_free_rbdr(nic, &qs->rbdr[qidx]); + + /* Free completion queue */ + for (qidx = 0; qidx < qs->cq_cnt; qidx++) + nicvf_free_cmp_queue(nic, &qs->cq[qidx]); + + /* Free send queue */ + for (qidx = 0; qidx < qs->sq_cnt; qidx++) + nicvf_free_snd_queue(nic, &qs->sq[qidx]); +} + +static int nicvf_alloc_resources(struct nicvf *nic) +{ + int qidx; + struct queue_set *qs = nic->qs; + + /* Alloc receive buffer descriptor ring */ + for (qidx = 0; qidx 
< qs->rbdr_cnt; qidx++) { + if (nicvf_init_rbdr(nic, &qs->rbdr[qidx], qs->rbdr_len, + DMA_BUFFER_LEN)) + goto alloc_fail; + } + + /* Alloc send queue */ + for (qidx = 0; qidx < qs->sq_cnt; qidx++) { + if (nicvf_init_snd_queue(nic, &qs->sq[qidx], qs->sq_len)) + goto alloc_fail; + } + + /* Alloc completion queue */ + for (qidx = 0; qidx < qs->cq_cnt; qidx++) { + if (nicvf_init_cmp_queue(nic, &qs->cq[qidx], qs->cq_len)) + goto alloc_fail; + } + + return 0; +alloc_fail: + nicvf_free_resources(nic); + return -1; +} + +int nicvf_set_qset_resources(struct nicvf *nic) +{ + struct queue_set *qs; + + qs = calloc(1, sizeof(struct queue_set)); + if (!qs) + return -1; + nic->qs = qs; + + /* Set count of each queue */ + qs->rbdr_cnt = RBDR_CNT; + qs->rq_cnt = 1; + qs->sq_cnt = SND_QUEUE_CNT; + qs->cq_cnt = CMP_QUEUE_CNT; + + /* Set queue lengths */ + qs->rbdr_len = RCV_BUF_COUNT; + qs->sq_len = SND_QUEUE_LEN; + qs->cq_len = CMP_QUEUE_LEN; + + nic->rx_queues = qs->rq_cnt; + nic->tx_queues = qs->sq_cnt; + + return 0; +} + +int nicvf_config_data_transfer(struct nicvf *nic, bool enable) +{ + bool disable = false; + struct queue_set *qs = nic->qs; + int qidx; + + if (!qs) + return 0; + + if (enable) { + if (nicvf_alloc_resources(nic)) + return -1; + + for (qidx = 0; qidx < qs->sq_cnt; qidx++) + nicvf_snd_queue_config(nic, qs, qidx, enable); + for (qidx = 0; qidx < qs->cq_cnt; qidx++) + nicvf_cmp_queue_config(nic, qs, qidx, enable); + for (qidx = 0; qidx < qs->rbdr_cnt; qidx++) + nicvf_rbdr_config(nic, qs, qidx, enable); + for (qidx = 0; qidx < qs->rq_cnt; qidx++) + nicvf_rcv_queue_config(nic, qs, qidx, enable); + } else { + for (qidx = 0; qidx < qs->rq_cnt; qidx++) + nicvf_rcv_queue_config(nic, qs, qidx, disable); + for (qidx = 0; qidx < qs->rbdr_cnt; qidx++) + nicvf_rbdr_config(nic, qs, qidx, disable); + for (qidx = 0; qidx < qs->sq_cnt; qidx++) + nicvf_snd_queue_config(nic, qs, qidx, disable); + for (qidx = 0; qidx < qs->cq_cnt; qidx++) + nicvf_cmp_queue_config(nic, qs, qidx, 
disable); + + nicvf_free_resources(nic); + } + + return 0; +} + +/* Get a free desc from SQ + * returns descriptor pointer & descriptor number + */ +static int nicvf_get_sq_desc(struct snd_queue *sq, int desc_cnt) +{ + int qentry; + + qentry = sq->tail; + sq->free_cnt -= desc_cnt; + sq->tail += desc_cnt; + sq->tail &= (sq->dmem.q_len - 1); + + return qentry; +} + +/* Free descriptor back to SQ for future use */ +void nicvf_put_sq_desc(struct snd_queue *sq, int desc_cnt) +{ + sq->free_cnt += desc_cnt; + sq->head += desc_cnt; + sq->head &= (sq->dmem.q_len - 1); +} + +static int nicvf_get_nxt_sqentry(struct snd_queue *sq, int qentry) +{ + qentry++; + qentry &= (sq->dmem.q_len - 1); + return qentry; +} + +void nicvf_sq_enable(struct nicvf *nic, struct snd_queue *sq, int qidx) +{ + u64 sq_cfg; + + sq_cfg = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx); + sq_cfg |= NICVF_SQ_EN; + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg); + /* Ring doorbell so that H/W restarts processing SQEs */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0); +} + +void nicvf_sq_disable(struct nicvf *nic, int qidx) +{ + u64 sq_cfg; + + sq_cfg = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx); + sq_cfg &= ~NICVF_SQ_EN; + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg); +} + +void nicvf_sq_free_used_descs(struct eth_device *netdev, struct snd_queue *sq, + int qidx) +{ + u64 head; + struct nicvf *nic = netdev->priv; + struct sq_hdr_subdesc *hdr; + + head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4; + + while (sq->head != head) { + hdr = (struct sq_hdr_subdesc *)GET_SQ_DESC(sq, sq->head); + if (hdr->subdesc_type != SQ_DESC_TYPE_HEADER) { + nicvf_put_sq_desc(sq, 1); + continue; + } + nicvf_put_sq_desc(sq, hdr->subdesc_cnt + 1); + } +} + +/* Get the number of SQ descriptors needed to xmit this skb */ +static int nicvf_sq_subdesc_required(struct nicvf *nic) +{ + int subdesc_cnt = MIN_SQ_DESC_PER_PKT_XMIT; + + return subdesc_cnt; +} + 
+/* Add SQ HEADER subdescriptor. + * First subdescriptor for every send descriptor. + */ +static inline void +nicvf_sq_add_hdr_subdesc(struct nicvf *nic, struct snd_queue *sq, int qentry, + int subdesc_cnt, void *pkt, size_t pkt_len) +{ + struct sq_hdr_subdesc *hdr; + + hdr = (struct sq_hdr_subdesc *)GET_SQ_DESC(sq, qentry); + sq->skbuff[qentry] = (uintptr_t)pkt; + + memset(hdr, 0, SND_QUEUE_DESC_SIZE); + hdr->subdesc_type = SQ_DESC_TYPE_HEADER; + /* Enable notification via CQE after processing SQE */ + hdr->post_cqe = 1; + /* No of subdescriptors following this */ + hdr->subdesc_cnt = subdesc_cnt; + hdr->tot_len = pkt_len; + + flush_dcache_range((uintptr_t)hdr, + (uintptr_t)hdr + sizeof(struct sq_hdr_subdesc)); +} + +/* SQ GATHER subdescriptor + * Must follow HDR descriptor + */ +static inline void nicvf_sq_add_gather_subdesc(struct snd_queue *sq, int qentry, + size_t size, uintptr_t data) +{ + struct sq_gather_subdesc *gather; + + qentry &= (sq->dmem.q_len - 1); + gather = (struct sq_gather_subdesc *)GET_SQ_DESC(sq, qentry); + + memset(gather, 0, SND_QUEUE_DESC_SIZE); + gather->subdesc_type = SQ_DESC_TYPE_GATHER; + gather->ld_type = NIC_SEND_LD_TYPE_E_LDD; + gather->size = size; + gather->addr = data; + + flush_dcache_range((uintptr_t)gather, + (uintptr_t)gather + sizeof(struct sq_gather_subdesc)); +} + +/* Append an skb to a SQ for packet transfer. 
*/ +int nicvf_sq_append_pkt(struct nicvf *nic, void *pkt, size_t pkt_size) +{ + int subdesc_cnt; + int sq_num = 0, qentry; + struct queue_set *qs; + struct snd_queue *sq; + + qs = nic->qs; + sq = &qs->sq[sq_num]; + + subdesc_cnt = nicvf_sq_subdesc_required(nic); + if (subdesc_cnt > sq->free_cnt) + goto append_fail; + + qentry = nicvf_get_sq_desc(sq, subdesc_cnt); + + /* Add SQ header subdesc */ + nicvf_sq_add_hdr_subdesc(nic, sq, qentry, subdesc_cnt - 1, + pkt, pkt_size); + + /* Add SQ gather subdescs */ + qentry = nicvf_get_nxt_sqentry(sq, qentry); + nicvf_sq_add_gather_subdesc(sq, qentry, pkt_size, (uintptr_t)(pkt)); + + flush_dcache_range((uintptr_t)pkt, + (uintptr_t)pkt + pkt_size); + + /* make sure all memory stores are done before ringing doorbell */ + asm volatile ("dsb sy"); + + /* Inform HW to xmit new packet */ + nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, + sq_num, subdesc_cnt); + return 1; + +append_fail: + printf("Not enough SQ descriptors to xmit pkt\n"); + return 0; +} + +static unsigned frag_num(unsigned i) +{ +#ifdef __BIG_ENDIAN + return (i & ~3) + 3 - (i & 3); +#else + return i; +#endif +} + +void *nicvf_get_rcv_pkt(struct nicvf *nic, void *cq_desc, size_t *pkt_len) +{ + int frag; + int payload_len = 0; + void *pkt = NULL; + struct cqe_rx_t *cqe_rx; + struct rbdr *rbdr; + struct rcv_queue *rq; + struct queue_set *qs = nic->qs; + uint16_t *rb_lens = NULL; + u64 *rb_ptrs = NULL; + + cqe_rx = (struct cqe_rx_t *)cq_desc; + + rq = &qs->rq[cqe_rx->rq_idx]; + rbdr = &qs->rbdr[rq->start_qs_rbdr_idx]; + rb_lens = cq_desc + (3 * sizeof(u64)); /* Use offsetof */ + /* Except on 88xx pass1, on all other chips CQE_RX2_S is added to + * CQE_RX at word6, hence buffer pointers move by a word + * + * Use existing 'hw_tso' flag which will be set for all chips + * except 88xx pass1 instead of an additional cache line + * access (or miss) by using pci dev's revision. 
+ */ + if (!nic->hw_tso) + rb_ptrs = (void *)cqe_rx + (6 * sizeof(u64)); + else + rb_ptrs = (void *)cqe_rx + (7 * sizeof(u64)); + + for (frag = 0; frag < cqe_rx->rb_cnt; frag++) { + payload_len = rb_lens[frag_num(frag)]; + + invalidate_dcache_range((uintptr_t)(*rb_ptrs), + (uintptr_t)(*rb_ptrs) + rbdr->dma_size); + + /* First fragment */ + *rb_ptrs = *rb_ptrs - cqe_rx->align_pad; + + pkt = nicvf_rb_ptr_to_pkt(nic, *rb_ptrs); + + invalidate_dcache_range((uintptr_t)pkt, + (uintptr_t)pkt + payload_len); + + if (cqe_rx->align_pad) { + pkt += cqe_rx->align_pad; + } + /* Next buffer pointer */ + rb_ptrs++; + + *pkt_len = payload_len; + } + return pkt; +} + +/* Clear interrupt */ +void nicvf_clear_intr(struct nicvf *nic, int int_type, int q_idx) +{ + u64 reg_val = 0; + + switch (int_type) { + case NICVF_INTR_CQ: + reg_val = ((1ULL << q_idx) << NICVF_INTR_CQ_SHIFT); + break; + case NICVF_INTR_SQ: + reg_val = ((1ULL << q_idx) << NICVF_INTR_SQ_SHIFT); + break; + case NICVF_INTR_RBDR: + reg_val = ((1ULL << q_idx) << NICVF_INTR_RBDR_SHIFT); + break; + case NICVF_INTR_PKT_DROP: + reg_val = (1ULL << NICVF_INTR_PKT_DROP_SHIFT); + break; + case NICVF_INTR_TCP_TIMER: + reg_val = (1ULL << NICVF_INTR_TCP_TIMER_SHIFT); + break; + case NICVF_INTR_MBOX: + reg_val = (1ULL << NICVF_INTR_MBOX_SHIFT); + break; + case NICVF_INTR_QS_ERR: + reg_val |= (1ULL << NICVF_INTR_QS_ERR_SHIFT); + break; + default: + printf("Failed to clear interrupt: unknown type\n"); + break; + } + + nicvf_reg_write(nic, NIC_VF_INT, reg_val); +} + +void nicvf_update_rq_stats(struct nicvf *nic, int rq_idx) +{ + struct rcv_queue *rq; + +#define GET_RQ_STATS(reg) \ + nicvf_reg_read(nic, NIC_QSET_RQ_0_7_STAT_0_1 |\ + (rq_idx << NIC_Q_NUM_SHIFT) | (reg << 3)) + + rq = &nic->qs->rq[rq_idx]; + rq->stats.bytes = GET_RQ_STATS(RQ_SQ_STATS_OCTS); + rq->stats.pkts = GET_RQ_STATS(RQ_SQ_STATS_PKTS); +} + +void nicvf_update_sq_stats(struct nicvf *nic, int sq_idx) +{ + struct snd_queue *sq; + +#define GET_SQ_STATS(reg) \ + 
nicvf_reg_read(nic, NIC_QSET_SQ_0_7_STAT_0_1 |\ + (sq_idx << NIC_Q_NUM_SHIFT) | (reg << 3)) + + sq = &nic->qs->sq[sq_idx]; + sq->stats.bytes = GET_SQ_STATS(RQ_SQ_STATS_OCTS); + sq->stats.pkts = GET_SQ_STATS(RQ_SQ_STATS_PKTS); +} + +/* Check for errors in the receive cmp.queue entry */ +int nicvf_check_cqe_rx_errs(struct nicvf *nic, + struct cmp_queue *cq, void *cq_desc) +{ + struct cqe_rx_t *cqe_rx; + struct cmp_queue_stats *stats = &cq->stats; + + cqe_rx = (struct cqe_rx_t *)cq_desc; + if (!cqe_rx->err_level && !cqe_rx->err_opcode) { + stats->rx.errop.good++; + return 0; + } + + switch (cqe_rx->err_level) { + case CQ_ERRLVL_MAC: + stats->rx.errlvl.mac_errs++; + break; + case CQ_ERRLVL_L2: + stats->rx.errlvl.l2_errs++; + break; + case CQ_ERRLVL_L3: + stats->rx.errlvl.l3_errs++; + break; + case CQ_ERRLVL_L4: + stats->rx.errlvl.l4_errs++; + break; + } + + switch (cqe_rx->err_opcode) { + case CQ_RX_ERROP_RE_PARTIAL: + stats->rx.errop.partial_pkts++; + break; + case CQ_RX_ERROP_RE_JABBER: + stats->rx.errop.jabber_errs++; + break; + case CQ_RX_ERROP_RE_FCS: + stats->rx.errop.fcs_errs++; + break; + case CQ_RX_ERROP_RE_TERMINATE: + stats->rx.errop.terminate_errs++; + break; + case CQ_RX_ERROP_RE_RX_CTL: + stats->rx.errop.bgx_rx_errs++; + break; + case CQ_RX_ERROP_PREL2_ERR: + stats->rx.errop.prel2_errs++; + break; + case CQ_RX_ERROP_L2_FRAGMENT: + stats->rx.errop.l2_frags++; + break; + case CQ_RX_ERROP_L2_OVERRUN: + stats->rx.errop.l2_overruns++; + break; + case CQ_RX_ERROP_L2_PFCS: + stats->rx.errop.l2_pfcs++; + break; + case CQ_RX_ERROP_L2_PUNY: + stats->rx.errop.l2_puny++; + break; + case CQ_RX_ERROP_L2_MAL: + stats->rx.errop.l2_hdr_malformed++; + break; + case CQ_RX_ERROP_L2_OVERSIZE: + stats->rx.errop.l2_oversize++; + break; + case CQ_RX_ERROP_L2_UNDERSIZE: + stats->rx.errop.l2_undersize++; + break; + case CQ_RX_ERROP_L2_LENMISM: + stats->rx.errop.l2_len_mismatch++; + break; + case CQ_RX_ERROP_L2_PCLP: + stats->rx.errop.l2_pclp++; + break; + case CQ_RX_ERROP_IP_NOT: 
+ stats->rx.errop.non_ip++; + break; + case CQ_RX_ERROP_IP_CSUM_ERR: + stats->rx.errop.ip_csum_err++; + break; + case CQ_RX_ERROP_IP_MAL: + stats->rx.errop.ip_hdr_malformed++; + break; + case CQ_RX_ERROP_IP_MALD: + stats->rx.errop.ip_payload_malformed++; + break; + case CQ_RX_ERROP_IP_HOP: + stats->rx.errop.ip_hop_errs++; + break; + case CQ_RX_ERROP_L3_ICRC: + stats->rx.errop.l3_icrc_errs++; + break; + case CQ_RX_ERROP_L3_PCLP: + stats->rx.errop.l3_pclp++; + break; + case CQ_RX_ERROP_L4_MAL: + stats->rx.errop.l4_malformed++; + break; + case CQ_RX_ERROP_L4_CHK: + stats->rx.errop.l4_csum_errs++; + break; + case CQ_RX_ERROP_UDP_LEN: + stats->rx.errop.udp_len_err++; + break; + case CQ_RX_ERROP_L4_PORT: + stats->rx.errop.bad_l4_port++; + break; + case CQ_RX_ERROP_TCP_FLAG: + stats->rx.errop.bad_tcp_flag++; + break; + case CQ_RX_ERROP_TCP_OFFSET: + stats->rx.errop.tcp_offset_errs++; + break; + case CQ_RX_ERROP_L4_PCLP: + stats->rx.errop.l4_pclp++; + break; + case CQ_RX_ERROP_RBDR_TRUNC: + stats->rx.errop.pkt_truncated++; + break; + } + + return 1; +} + +/* Check for errors in the send cmp.queue entry */ +int nicvf_check_cqe_tx_errs(struct nicvf *nic, + struct cmp_queue *cq, void *cq_desc) +{ + struct cqe_send_t *cqe_tx; + struct cmp_queue_stats *stats = &cq->stats; + + cqe_tx = (struct cqe_send_t *)cq_desc; + switch (cqe_tx->send_status) { + case CQ_TX_ERROP_GOOD: + stats->tx.good++; + return 0; + break; + case CQ_TX_ERROP_DESC_FAULT: + stats->tx.desc_fault++; + break; + case CQ_TX_ERROP_HDR_CONS_ERR: + stats->tx.hdr_cons_err++; + break; + case CQ_TX_ERROP_SUBDC_ERR: + stats->tx.subdesc_err++; + break; + case CQ_TX_ERROP_IMM_SIZE_OFLOW: + stats->tx.imm_size_oflow++; + break; + case CQ_TX_ERROP_DATA_SEQUENCE_ERR: + stats->tx.data_seq_err++; + break; + case CQ_TX_ERROP_MEM_SEQUENCE_ERR: + stats->tx.mem_seq_err++; + break; + case CQ_TX_ERROP_LOCK_VIOL: + stats->tx.lock_viol++; + break; + case CQ_TX_ERROP_DATA_FAULT: + stats->tx.data_fault++; + break; + case 
CQ_TX_ERROP_TSTMP_CONFLICT: + stats->tx.tstmp_conflict++; + break; + case CQ_TX_ERROP_TSTMP_TIMEOUT: + stats->tx.tstmp_timeout++; + break; + case CQ_TX_ERROP_MEM_FAULT: + stats->tx.mem_fault++; + break; + case CQ_TX_ERROP_CK_OVERLAP: + stats->tx.csum_overlap++; + break; + case CQ_TX_ERROP_CK_OFLOW: + stats->tx.csum_overflow++; + break; + } + + return 1; +} diff --git a/drivers/net/cavium/nicvf_queues.h b/drivers/net/cavium/nicvf_queues.h new file mode 100644 index 0000000000..2c476b4a74 --- /dev/null +++ b/drivers/net/cavium/nicvf_queues.h @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#ifndef NICVF_QUEUES_H +#define NICVF_QUEUES_H + +#include "q_struct.h" + +#define MAX_QUEUE_SET 128 +#define MAX_RCV_QUEUES_PER_QS 8 +#define MAX_RCV_BUF_DESC_RINGS_PER_QS 2 +#define MAX_SND_QUEUES_PER_QS 8 +#define MAX_CMP_QUEUES_PER_QS 8 + +/* VF's queue interrupt ranges */ +#define NICVF_INTR_ID_CQ 0 +#define NICVF_INTR_ID_SQ 8 +#define NICVF_INTR_ID_RBDR 16 +#define NICVF_INTR_ID_MISC 18 +#define NICVF_INTR_ID_QS_ERR 19 + +#define for_each_cq_irq(irq) \ + for (irq = NICVF_INTR_ID_CQ; irq < NICVF_INTR_ID_SQ; irq++) +#define for_each_sq_irq(irq) \ + for (irq = NICVF_INTR_ID_SQ; irq < NICVF_INTR_ID_RBDR; irq++) +#define for_each_rbdr_irq(irq) \ + for (irq = NICVF_INTR_ID_RBDR; irq < NICVF_INTR_ID_MISC; irq++) + +#define RBDR_SIZE0 0ULL /* 8K entries */ +#define RBDR_SIZE1 1ULL /* 16K entries */ +#define RBDR_SIZE2 2ULL /* 32K entries */ +#define RBDR_SIZE3 3ULL /* 64K entries */ +#define RBDR_SIZE4 4ULL /* 126K entries */ +#define RBDR_SIZE5 5ULL /* 256K entries */ +#define RBDR_SIZE6 6ULL /* 512K entries */ + +#define SND_QUEUE_SIZE0 0ULL /* 1K entries */ +#define SND_QUEUE_SIZE1 1ULL /* 2K entries */ +#define SND_QUEUE_SIZE2 2ULL /* 4K entries */ +#define SND_QUEUE_SIZE3 3ULL /* 8K entries */ +#define SND_QUEUE_SIZE4 4ULL /* 16K entries */ +#define SND_QUEUE_SIZE5 5ULL /* 32K entries */ +#define SND_QUEUE_SIZE6 6ULL /* 64K 
entries */ + +#define CMP_QUEUE_SIZE0 0ULL /* 1K entries */ +#define CMP_QUEUE_SIZE1 1ULL /* 2K entries */ +#define CMP_QUEUE_SIZE2 2ULL /* 4K entries */ +#define CMP_QUEUE_SIZE3 3ULL /* 8K entries */ +#define CMP_QUEUE_SIZE4 4ULL /* 16K entries */ +#define CMP_QUEUE_SIZE5 5ULL /* 32K entries */ +#define CMP_QUEUE_SIZE6 6ULL /* 64K entries */ + +/* Default queue count per QS, its lengths and threshold values */ +#define RBDR_CNT 1 +#define RCV_QUEUE_CNT 1 +#define SND_QUEUE_CNT 1 +#define CMP_QUEUE_CNT 1 /* Max of RCV and SND qcount */ + +#define SND_QSIZE SND_QUEUE_SIZE0 +#define SND_QUEUE_LEN (1ULL << (SND_QSIZE + 10)) +#define SND_QUEUE_THRESH 2ULL +#define MIN_SQ_DESC_PER_PKT_XMIT 2 +#define MAX_CQE_PER_PKT_XMIT 2 + +#define CMP_QSIZE CMP_QUEUE_SIZE0 +#define CMP_QUEUE_LEN (1ULL << (CMP_QSIZE + 10)) +#define CMP_QUEUE_CQE_THRESH 0 +#define CMP_QUEUE_TIMER_THRESH 1 /* 1 ms */ + +#define RBDR_SIZE RBDR_SIZE0 +#define RCV_BUF_COUNT (1ULL << (RBDR_SIZE + 13)) +#define RBDR_THRESH (RCV_BUF_COUNT / 2) +#define DMA_BUFFER_LEN 2048 /* In multiples of 128bytes */ +#define RCV_FRAG_LEN DMA_BUFFER_LEN + +#define MAX_CQES_FOR_TX ((SND_QUEUE_LEN / MIN_SQ_DESC_PER_PKT_XMIT) *\ + MAX_CQE_PER_PKT_XMIT) +#define RQ_CQ_DROP ((CMP_QUEUE_LEN - MAX_CQES_FOR_TX) / 256) + +/* Descriptor size */ +#define SND_QUEUE_DESC_SIZE 16 /* 128 bits */ +#define CMP_QUEUE_DESC_SIZE 512 + +/* Buffer / descriptor alignments */ +#define NICVF_RCV_BUF_ALIGN 7 +#define NICVF_RCV_BUF_ALIGN_BYTES (1ULL << NICVF_RCV_BUF_ALIGN) +#define NICVF_CQ_BASE_ALIGN_BYTES 512 /* 9 bits */ +#define NICVF_SQ_BASE_ALIGN_BYTES 128 /* 7 bits */ + +#define NICVF_ALIGNED_ADDR(ADDR, ALIGN_BYTES) ALIGN(ADDR, ALIGN_BYTES) +#define NICVF_ADDR_ALIGN_LEN(ADDR, BYTES)\ + (NICVF_ALIGNED_ADDR(ADDR, BYTES) - BYTES) +#define NICVF_RCV_BUF_ALIGN_LEN(X)\ + (NICVF_ALIGNED_ADDR(X, NICVF_RCV_BUF_ALIGN_BYTES) - X) + +/* Queue enable/disable */ +#define NICVF_SQ_EN (1ULL << 19) + +/* Queue reset */ +#define NICVF_CQ_RESET (1ULL << 41) 
+#define NICVF_SQ_RESET (1ULL << 17) +#define NICVF_RBDR_RESET (1ULL << 43) + +enum CQ_RX_ERRLVL_E { + CQ_ERRLVL_MAC, + CQ_ERRLVL_L2, + CQ_ERRLVL_L3, + CQ_ERRLVL_L4, +}; + +enum CQ_RX_ERROP_E { + CQ_RX_ERROP_RE_NONE = 0x0, + CQ_RX_ERROP_RE_PARTIAL = 0x1, + CQ_RX_ERROP_RE_JABBER = 0x2, + CQ_RX_ERROP_RE_FCS = 0x7, + CQ_RX_ERROP_RE_TERMINATE = 0x9, + CQ_RX_ERROP_RE_RX_CTL = 0xb, + CQ_RX_ERROP_PREL2_ERR = 0x1f, + CQ_RX_ERROP_L2_FRAGMENT = 0x20, + CQ_RX_ERROP_L2_OVERRUN = 0x21, + CQ_RX_ERROP_L2_PFCS = 0x22, + CQ_RX_ERROP_L2_PUNY = 0x23, + CQ_RX_ERROP_L2_MAL = 0x24, + CQ_RX_ERROP_L2_OVERSIZE = 0x25, + CQ_RX_ERROP_L2_UNDERSIZE = 0x26, + CQ_RX_ERROP_L2_LENMISM = 0x27, + CQ_RX_ERROP_L2_PCLP = 0x28, + CQ_RX_ERROP_IP_NOT = 0x41, + CQ_RX_ERROP_IP_CSUM_ERR = 0x42, + CQ_RX_ERROP_IP_MAL = 0x43, + CQ_RX_ERROP_IP_MALD = 0x44, + CQ_RX_ERROP_IP_HOP = 0x45, + CQ_RX_ERROP_L3_ICRC = 0x46, + CQ_RX_ERROP_L3_PCLP = 0x47, + CQ_RX_ERROP_L4_MAL = 0x61, + CQ_RX_ERROP_L4_CHK = 0x62, + CQ_RX_ERROP_UDP_LEN = 0x63, + CQ_RX_ERROP_L4_PORT = 0x64, + CQ_RX_ERROP_TCP_FLAG = 0x65, + CQ_RX_ERROP_TCP_OFFSET = 0x66, + CQ_RX_ERROP_L4_PCLP = 0x67, + CQ_RX_ERROP_RBDR_TRUNC = 0x70, +}; + +enum CQ_TX_ERROP_E { + CQ_TX_ERROP_GOOD = 0x0, + CQ_TX_ERROP_DESC_FAULT = 0x10, + CQ_TX_ERROP_HDR_CONS_ERR = 0x11, + CQ_TX_ERROP_SUBDC_ERR = 0x12, + CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80, + CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81, + CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82, + CQ_TX_ERROP_LOCK_VIOL = 0x83, + CQ_TX_ERROP_DATA_FAULT = 0x84, + CQ_TX_ERROP_TSTMP_CONFLICT = 0x85, + CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86, + CQ_TX_ERROP_MEM_FAULT = 0x87, + CQ_TX_ERROP_CK_OVERLAP = 0x88, + CQ_TX_ERROP_CK_OFLOW = 0x89, + CQ_TX_ERROP_ENUM_LAST = 0x8a, +}; + +struct cmp_queue_stats { + struct rx_stats { + struct { + u64 mac_errs; + u64 l2_errs; + u64 l3_errs; + u64 l4_errs; + } errlvl; + struct { + u64 good; + u64 partial_pkts; + u64 jabber_errs; + u64 fcs_errs; + u64 terminate_errs; + u64 bgx_rx_errs; + u64 prel2_errs; + u64 l2_frags; + u64 l2_overruns; 
+ u64 l2_pfcs; + u64 l2_puny; + u64 l2_hdr_malformed; + u64 l2_oversize; + u64 l2_undersize; + u64 l2_len_mismatch; + u64 l2_pclp; + u64 non_ip; + u64 ip_csum_err; + u64 ip_hdr_malformed; + u64 ip_payload_malformed; + u64 ip_hop_errs; + u64 l3_icrc_errs; + u64 l3_pclp; + u64 l4_malformed; + u64 l4_csum_errs; + u64 udp_len_err; + u64 bad_l4_port; + u64 bad_tcp_flag; + u64 tcp_offset_errs; + u64 l4_pclp; + u64 pkt_truncated; + } errop; + } rx; + struct tx_stats { + u64 good; + u64 desc_fault; + u64 hdr_cons_err; + u64 subdesc_err; + u64 imm_size_oflow; + u64 data_seq_err; + u64 mem_seq_err; + u64 lock_viol; + u64 data_fault; + u64 tstmp_conflict; + u64 tstmp_timeout; + u64 mem_fault; + u64 csum_overlap; + u64 csum_overflow; + } tx; +}; + +enum RQ_SQ_STATS { + RQ_SQ_STATS_OCTS, + RQ_SQ_STATS_PKTS, +}; + +struct rx_tx_queue_stats { + u64 bytes; + u64 pkts; +}; + +struct q_desc_mem { + uintptr_t dma; + uint64_t size; + uint16_t q_len; + uintptr_t phys_base; + void *base; + void *unalign_base; + bool allocated; +}; + +struct rbdr { + bool enable; + uint32_t dma_size; + uint32_t thresh; /* Threshold level for interrupt */ + void *desc; + uint32_t head; + uint32_t tail; + struct q_desc_mem dmem; + uintptr_t buf_mem; + uintptr_t buffers; +}; + +struct rcv_queue { + bool enable; + struct rbdr *rbdr_start; + struct rbdr *rbdr_cont; + bool en_tcp_reassembly; + uint8_t cq_qs; /* CQ's QS to which this RQ is assigned */ + uint8_t cq_idx; /* CQ index (0 to 7) in the QS */ + uint8_t cont_rbdr_qs; /* Continue buffer ptrs - QS num */ + uint8_t cont_qs_rbdr_idx; /* RBDR idx in the cont QS */ + uint8_t start_rbdr_qs; /* First buffer ptrs - QS num */ + uint8_t start_qs_rbdr_idx; /* RBDR idx in the above QS */ + uint8_t caching; + struct rx_tx_queue_stats stats; +}; + +struct cmp_queue { + bool enable; + uint16_t intr_timer_thresh; + uint16_t thresh; + void *desc; + struct q_desc_mem dmem; + struct cmp_queue_stats stats; +}; + +struct snd_queue { + bool enable; + uint8_t cq_qs; /* CQ's 
QS to which this SQ is pointing */ + uint8_t cq_idx; /* CQ index (0 to 7) in the above QS */ + uint16_t thresh; + uint32_t free_cnt; + uint32_t head; + uint32_t tail; + uint64_t *skbuff; + void *desc; + struct q_desc_mem dmem; + struct rx_tx_queue_stats stats; +}; + +struct queue_set { + bool enable; + bool be_en; + uint8_t vnic_id; + uint8_t rq_cnt; + uint8_t cq_cnt; + uint64_t cq_len; + uint8_t sq_cnt; + uint64_t sq_len; + uint8_t rbdr_cnt; + uint64_t rbdr_len; + struct rcv_queue rq[MAX_RCV_QUEUES_PER_QS]; + struct cmp_queue cq[MAX_CMP_QUEUES_PER_QS]; + struct snd_queue sq[MAX_SND_QUEUES_PER_QS]; + struct rbdr rbdr[MAX_RCV_BUF_DESC_RINGS_PER_QS]; +}; + +#define GET_RBDR_DESC(RING, idx)\ + (&(((struct rbdr_entry_t *)((RING)->desc))[idx])) +#define GET_SQ_DESC(RING, idx)\ + (&(((struct sq_hdr_subdesc *)((RING)->desc))[idx])) +#define GET_CQ_DESC(RING, idx)\ + (&(((union cq_desc_t *)((RING)->desc))[idx])) + +/* CQ status bits */ +#define CQ_WR_FULL (1 << 26) +#define CQ_WR_DISABLE (1 << 25) +#define CQ_WR_FAULT (1 << 24) +#define CQ_CQE_COUNT (0xFFFF << 0) + +#define CQ_ERR_MASK (CQ_WR_FULL | CQ_WR_DISABLE | CQ_WR_FAULT) + +int nicvf_set_qset_resources(struct nicvf *nic); +int nicvf_config_data_transfer(struct nicvf *nic, bool enable); +void nicvf_qset_config(struct nicvf *nic, bool enable); +void nicvf_cmp_queue_config(struct nicvf *nic, struct queue_set *qs, + int qidx, bool enable); + +void nicvf_sq_enable(struct nicvf *nic, struct snd_queue *sq, int qidx); +void nicvf_sq_disable(struct nicvf *nic, int qidx); +void nicvf_put_sq_desc(struct snd_queue *sq, int desc_cnt); +void nicvf_sq_free_used_descs(struct eth_device *netdev, + struct snd_queue *sq, int qidx); +int nicvf_sq_append_pkt(struct nicvf *nic, void *pkt, size_t pkt_len); + +void *nicvf_get_rcv_pkt(struct nicvf *nic, void *cq_desc, size_t *pkt_len); +void nicvf_refill_rbdr(struct nicvf *nic); + +void nicvf_enable_intr(struct nicvf *nic, int int_type, int q_idx); +void nicvf_disable_intr(struct nicvf 
*nic, int int_type, int q_idx); +void nicvf_clear_intr(struct nicvf *nic, int int_type, int q_idx); +int nicvf_is_intr_enabled(struct nicvf *nic, int int_type, int q_idx); + +/* Register access APIs */ +void nicvf_reg_write(struct nicvf *nic, uint64_t offset, uint64_t val); +uint64_t nicvf_reg_read(struct nicvf *nic, uint64_t offset); +void nicvf_qset_reg_write(struct nicvf *nic, uint64_t offset, uint64_t val); +uint64_t nicvf_qset_reg_read(struct nicvf *nic, uint64_t offset); +void nicvf_queue_reg_write(struct nicvf *nic, uint64_t offset, + uint64_t qidx, uint64_t val); +uint64_t nicvf_queue_reg_read(struct nicvf *nic, + uint64_t offset, uint64_t qidx); + +/* Stats */ +void nicvf_update_rq_stats(struct nicvf *nic, int rq_idx); +void nicvf_update_sq_stats(struct nicvf *nic, int sq_idx); +int nicvf_check_cqe_rx_errs(struct nicvf *nic, + struct cmp_queue *cq, void *cq_desc); +int nicvf_check_cqe_tx_errs(struct nicvf *nic, + struct cmp_queue *cq, void *cq_desc); +#endif /* NICVF_QUEUES_H */ diff --git a/drivers/net/cavium/q_struct.h b/drivers/net/cavium/q_struct.h new file mode 100644 index 0000000000..59b609024d --- /dev/null +++ b/drivers/net/cavium/q_struct.h @@ -0,0 +1,692 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#ifndef Q_STRUCT_H +#define Q_STRUCT_H + +/* Load transaction types for reading segment bytes specified by + * NIC_SEND_GATHER_S[LD_TYPE]. 
+ */ +enum nic_send_ld_type_e { + NIC_SEND_LD_TYPE_E_LDD = 0x0, + NIC_SEND_LD_TYPE_E_LDT = 0x1, + NIC_SEND_LD_TYPE_E_LDWB = 0x2, + NIC_SEND_LD_TYPE_E_ENUM_LAST = 0x3, +}; + +enum ether_type_algorithm { + ETYPE_ALG_NONE = 0x0, + ETYPE_ALG_SKIP = 0x1, + ETYPE_ALG_ENDPARSE = 0x2, + ETYPE_ALG_VLAN = 0x3, + ETYPE_ALG_VLAN_STRIP = 0x4, +}; + +enum layer3_type { + L3TYPE_NONE = 0x00, + L3TYPE_GRH = 0x01, + L3TYPE_IPV4 = 0x04, + L3TYPE_IPV4_OPTIONS = 0x05, + L3TYPE_IPV6 = 0x06, + L3TYPE_IPV6_OPTIONS = 0x07, + L3TYPE_ET_STOP = 0x0D, + L3TYPE_OTHER = 0x0E, +}; + +enum layer4_type { + L4TYPE_NONE = 0x00, + L4TYPE_IPSEC_ESP = 0x01, + L4TYPE_IPFRAG = 0x02, + L4TYPE_IPCOMP = 0x03, + L4TYPE_TCP = 0x04, + L4TYPE_UDP = 0x05, + L4TYPE_SCTP = 0x06, + L4TYPE_GRE = 0x07, + L4TYPE_ROCE_BTH = 0x08, + L4TYPE_OTHER = 0x0E, +}; + +/* CPI and RSSI configuration */ +enum cpi_algorithm_type { + CPI_ALG_NONE = 0x0, + CPI_ALG_VLAN = 0x1, + CPI_ALG_VLAN16 = 0x2, + CPI_ALG_DIFF = 0x3, +}; + +enum rss_algorithm_type { + RSS_ALG_NONE = 0x00, + RSS_ALG_PORT = 0x01, + RSS_ALG_IP = 0x02, + RSS_ALG_TCP_IP = 0x03, + RSS_ALG_UDP_IP = 0x04, + RSS_ALG_SCTP_IP = 0x05, + RSS_ALG_GRE_IP = 0x06, + RSS_ALG_ROCE = 0x07, +}; + +enum rss_hash_cfg { + RSS_HASH_L2ETC = 0x00, + RSS_HASH_IP = 0x01, + RSS_HASH_TCP = 0x02, + RSS_HASH_TCP_SYN_DIS = 0x03, + RSS_HASH_UDP = 0x04, + RSS_HASH_L4ETC = 0x05, + RSS_HASH_ROCE = 0x06, + RSS_L3_BIDI = 0x07, + RSS_L4_BIDI = 0x08, +}; + +/* Completion queue entry types */ +enum cqe_type { + CQE_TYPE_INVALID = 0x0, + CQE_TYPE_RX = 0x2, + CQE_TYPE_RX_SPLIT = 0x3, + CQE_TYPE_RX_TCP = 0x4, + CQE_TYPE_SEND = 0x8, + CQE_TYPE_SEND_PTP = 0x9, +}; + +enum cqe_rx_tcp_status { + CQE_RX_STATUS_VALID_TCP_CNXT = 0x00, + CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F, +}; + +enum cqe_send_status { + CQE_SEND_STATUS_GOOD = 0x00, + CQE_SEND_STATUS_DESC_FAULT = 0x01, + CQE_SEND_STATUS_HDR_CONS_ERR = 0x11, + CQE_SEND_STATUS_SUBDESC_ERR = 0x12, + CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80, + 
CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81, + CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82, + CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83, + CQE_SEND_STATUS_LOCK_VIOL = 0x84, + CQE_SEND_STATUS_LOCK_UFLOW = 0x85, + CQE_SEND_STATUS_DATA_FAULT = 0x86, + CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87, + CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88, + CQE_SEND_STATUS_MEM_FAULT = 0x89, + CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A, + CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B, +}; + +enum cqe_rx_tcp_end_reason { + CQE_RX_TCP_END_FIN_FLAG_DET = 0, + CQE_RX_TCP_END_INVALID_FLAG = 1, + CQE_RX_TCP_END_TIMEOUT = 2, + CQE_RX_TCP_END_OUT_OF_SEQ = 3, + CQE_RX_TCP_END_PKT_ERR = 4, + CQE_RX_TCP_END_QS_DISABLED = 0x0F, +}; + +/* Packet protocol level error enumeration */ +enum cqe_rx_err_level { + CQE_RX_ERRLVL_RE = 0x0, + CQE_RX_ERRLVL_L2 = 0x1, + CQE_RX_ERRLVL_L3 = 0x2, + CQE_RX_ERRLVL_L4 = 0x3, +}; + +/* Packet protocol level error type enumeration */ +enum cqe_rx_err_opcode { + CQE_RX_ERR_RE_NONE = 0x0, + CQE_RX_ERR_RE_PARTIAL = 0x1, + CQE_RX_ERR_RE_JABBER = 0x2, + CQE_RX_ERR_RE_FCS = 0x7, + CQE_RX_ERR_RE_TERMINATE = 0x9, + CQE_RX_ERR_RE_RX_CTL = 0xb, + CQE_RX_ERR_PREL2_ERR = 0x1f, + CQE_RX_ERR_L2_FRAGMENT = 0x20, + CQE_RX_ERR_L2_OVERRUN = 0x21, + CQE_RX_ERR_L2_PFCS = 0x22, + CQE_RX_ERR_L2_PUNY = 0x23, + CQE_RX_ERR_L2_MAL = 0x24, + CQE_RX_ERR_L2_OVERSIZE = 0x25, + CQE_RX_ERR_L2_UNDERSIZE = 0x26, + CQE_RX_ERR_L2_LENMISM = 0x27, + CQE_RX_ERR_L2_PCLP = 0x28, + CQE_RX_ERR_IP_NOT = 0x41, + CQE_RX_ERR_IP_CHK = 0x42, + CQE_RX_ERR_IP_MAL = 0x43, + CQE_RX_ERR_IP_MALD = 0x44, + CQE_RX_ERR_IP_HOP = 0x45, + CQE_RX_ERR_L3_ICRC = 0x46, + CQE_RX_ERR_L3_PCLP = 0x47, + CQE_RX_ERR_L4_MAL = 0x61, + CQE_RX_ERR_L4_CHK = 0x62, + CQE_RX_ERR_UDP_LEN = 0x63, + CQE_RX_ERR_L4_PORT = 0x64, + CQE_RX_ERR_TCP_FLAG = 0x65, + CQE_RX_ERR_TCP_OFFSET = 0x66, + CQE_RX_ERR_L4_PCLP = 0x67, + CQE_RX_ERR_RBDR_TRUNC = 0x70, +}; + +struct cqe_rx_t { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 cqe_type:4; /* W0 */ + u64 stdn_fault:1; + u64 rsvd0:1; + u64 rq_qs:7; + u64 rq_idx:3; 
+ u64 rsvd1:12; + u64 rss_alg:4; + u64 rsvd2:4; + u64 rb_cnt:4; + u64 vlan_found:1; + u64 vlan_stripped:1; + u64 vlan2_found:1; + u64 vlan2_stripped:1; + u64 l4_type:4; + u64 l3_type:4; + u64 l2_present:1; + u64 err_level:3; + u64 err_opcode:8; + + u64 pkt_len:16; /* W1 */ + u64 l2_ptr:8; + u64 l3_ptr:8; + u64 l4_ptr:8; + u64 cq_pkt_len:8; + u64 align_pad:3; + u64 rsvd3:1; + u64 chan:12; + + u64 rss_tag:32; /* W2 */ + u64 vlan_tci:16; + u64 vlan_ptr:8; + u64 vlan2_ptr:8; + + u64 rb3_sz:16; /* W3 */ + u64 rb2_sz:16; + u64 rb1_sz:16; + u64 rb0_sz:16; + + u64 rb7_sz:16; /* W4 */ + u64 rb6_sz:16; + u64 rb5_sz:16; + u64 rb4_sz:16; + + u64 rb11_sz:16; /* W5 */ + u64 rb10_sz:16; + u64 rb9_sz:16; + u64 rb8_sz:16; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 err_opcode:8; + u64 err_level:3; + u64 l2_present:1; + u64 l3_type:4; + u64 l4_type:4; + u64 vlan2_stripped:1; + u64 vlan2_found:1; + u64 vlan_stripped:1; + u64 vlan_found:1; + u64 rb_cnt:4; + u64 rsvd2:4; + u64 rss_alg:4; + u64 rsvd1:12; + u64 rq_idx:3; + u64 rq_qs:7; + u64 rsvd0:1; + u64 stdn_fault:1; + u64 cqe_type:4; /* W0 */ + u64 chan:12; + u64 rsvd3:1; + u64 align_pad:3; + u64 cq_pkt_len:8; + u64 l4_ptr:8; + u64 l3_ptr:8; + u64 l2_ptr:8; + u64 pkt_len:16; /* W1 */ + u64 vlan2_ptr:8; + u64 vlan_ptr:8; + u64 vlan_tci:16; + u64 rss_tag:32; /* W2 */ + u64 rb0_sz:16; + u64 rb1_sz:16; + u64 rb2_sz:16; + u64 rb3_sz:16; /* W3 */ + u64 rb4_sz:16; + u64 rb5_sz:16; + u64 rb6_sz:16; + u64 rb7_sz:16; /* W4 */ + u64 rb8_sz:16; + u64 rb9_sz:16; + u64 rb10_sz:16; + u64 rb11_sz:16; /* W5 */ +#endif + u64 rb0_ptr:64; + u64 rb1_ptr:64; + u64 rb2_ptr:64; + u64 rb3_ptr:64; + u64 rb4_ptr:64; + u64 rb5_ptr:64; + u64 rb6_ptr:64; + u64 rb7_ptr:64; + u64 rb8_ptr:64; + u64 rb9_ptr:64; + u64 rb10_ptr:64; + u64 rb11_ptr:64; +}; + +struct cqe_rx_tcp_err_t { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 cqe_type:4; /* W0 */ + u64 rsvd0:60; + + u64 rsvd1:4; /* W1 */ + u64 partial_first:1; + u64 rsvd2:27; + u64 rbdr_bytes:8; + u64 rsvd3:24; +#elif 
defined(__LITTLE_ENDIAN_BITFIELD) + u64 rsvd0:60; + u64 cqe_type:4; + + u64 rsvd3:24; + u64 rbdr_bytes:8; + u64 rsvd2:27; + u64 partial_first:1; + u64 rsvd1:4; +#endif +}; + +struct cqe_rx_tcp_t { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 cqe_type:4; /* W0 */ + u64 rsvd0:52; + u64 cq_tcp_status:8; + + u64 rsvd1:32; /* W1 */ + u64 tcp_cntx_bytes:8; + u64 rsvd2:8; + u64 tcp_err_bytes:16; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 cq_tcp_status:8; + u64 rsvd0:52; + u64 cqe_type:4; /* W0 */ + + u64 tcp_err_bytes:16; + u64 rsvd2:8; + u64 tcp_cntx_bytes:8; + u64 rsvd1:32; /* W1 */ +#endif +}; + +struct cqe_send_t { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 cqe_type:4; /* W0 */ + u64 rsvd0:4; + u64 sqe_ptr:16; + u64 rsvd1:4; + u64 rsvd2:10; + u64 sq_qs:7; + u64 sq_idx:3; + u64 rsvd3:8; + u64 send_status:8; + + u64 ptp_timestamp:64; /* W1 */ +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 send_status:8; + u64 rsvd3:8; + u64 sq_idx:3; + u64 sq_qs:7; + u64 rsvd2:10; + u64 rsvd1:4; + u64 sqe_ptr:16; + u64 rsvd0:4; + u64 cqe_type:4; /* W0 */ + + u64 ptp_timestamp:64; /* W1 */ +#endif +}; + +union cq_desc_t { + u64 u[64]; + struct cqe_send_t snd_hdr; + struct cqe_rx_t rx_hdr; + struct cqe_rx_tcp_t rx_tcp_hdr; + struct cqe_rx_tcp_err_t rx_tcp_err_hdr; +}; + +struct rbdr_entry_t { + u64 buf_addr; +}; + +/* TCP reassembly context */ +struct rbe_tcp_cnxt_t { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 tcp_pkt_cnt:12; + u64 rsvd1:4; + u64 align_hdr_bytes:4; + u64 align_ptr_bytes:4; + u64 ptr_bytes:16; + u64 rsvd2:24; + u64 cqe_type:4; + u64 rsvd0:54; + u64 tcp_end_reason:2; + u64 tcp_status:4; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 tcp_status:4; + u64 tcp_end_reason:2; + u64 rsvd0:54; + u64 cqe_type:4; + u64 rsvd2:24; + u64 ptr_bytes:16; + u64 align_ptr_bytes:4; + u64 align_hdr_bytes:4; + u64 rsvd1:4; + u64 tcp_pkt_cnt:12; +#endif +}; + +/* Always Big endian */ +struct rx_hdr_t { + u64 opaque:32; + u64 rss_flow:8; + u64 skip_length:6; + u64 disable_rss:1; + u64 
disable_tcp_reassembly:1; + u64 nodrop:1; + u64 dest_alg:2; + u64 rsvd0:2; + u64 dest_rq:11; +}; + +enum send_l4_csum_type { + SEND_L4_CSUM_DISABLE = 0x00, + SEND_L4_CSUM_UDP = 0x01, + SEND_L4_CSUM_TCP = 0x02, + SEND_L4_CSUM_SCTP = 0x03, +}; + +enum send_crc_alg { + SEND_CRCALG_CRC32 = 0x00, + SEND_CRCALG_CRC32C = 0x01, + SEND_CRCALG_ICRC = 0x02, +}; + +enum send_load_type { + SEND_LD_TYPE_LDD = 0x00, + SEND_LD_TYPE_LDT = 0x01, + SEND_LD_TYPE_LDWB = 0x02, +}; + +enum send_mem_alg_type { + SEND_MEMALG_SET = 0x00, + SEND_MEMALG_ADD = 0x08, + SEND_MEMALG_SUB = 0x09, + SEND_MEMALG_ADDLEN = 0x0A, + SEND_MEMALG_SUBLEN = 0x0B, +}; + +enum send_mem_dsz_type { + SEND_MEMDSZ_B64 = 0x00, + SEND_MEMDSZ_B32 = 0x01, + SEND_MEMDSZ_B8 = 0x03, +}; + +enum sq_subdesc_type { + SQ_DESC_TYPE_INVALID = 0x00, + SQ_DESC_TYPE_HEADER = 0x01, + SQ_DESC_TYPE_CRC = 0x02, + SQ_DESC_TYPE_IMMEDIATE = 0x03, + SQ_DESC_TYPE_GATHER = 0x04, + SQ_DESC_TYPE_MEMORY = 0x05, +}; + +struct sq_crc_subdesc { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 rsvd1:32; + u64 crc_ival:32; + u64 subdesc_type:4; + u64 crc_alg:2; + u64 rsvd0:10; + u64 crc_insert_pos:16; + u64 hdr_start:16; + u64 crc_len:16; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 crc_len:16; + u64 hdr_start:16; + u64 crc_insert_pos:16; + u64 rsvd0:10; + u64 crc_alg:2; + u64 subdesc_type:4; + u64 crc_ival:32; + u64 rsvd1:32; +#endif +}; + +struct sq_gather_subdesc { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 subdesc_type:4; /* W0 */ + u64 ld_type:2; + u64 rsvd0:42; + u64 size:16; + + u64 rsvd1:15; /* W1 */ + u64 addr:49; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 size:16; + u64 rsvd0:42; + u64 ld_type:2; + u64 subdesc_type:4; /* W0 */ + + u64 addr:49; + u64 rsvd1:15; /* W1 */ +#endif +}; + +/* SQ immediate subdescriptor */ +struct sq_imm_subdesc { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 subdesc_type:4; /* W0 */ + u64 rsvd0:46; + u64 len:14; + + u64 data:64; /* W1 */ +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 len:14; + u64 rsvd0:46; + u64 
subdesc_type:4; /* W0 */ + + u64 data:64; /* W1 */ +#endif +}; + +struct sq_mem_subdesc { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 subdesc_type:4; /* W0 */ + u64 mem_alg:4; + u64 mem_dsz:2; + u64 wmem:1; + u64 rsvd0:21; + u64 offset:32; + + u64 rsvd1:15; /* W1 */ + u64 addr:49; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 offset:32; + u64 rsvd0:21; + u64 wmem:1; + u64 mem_dsz:2; + u64 mem_alg:4; + u64 subdesc_type:4; /* W0 */ + + u64 addr:49; + u64 rsvd1:15; /* W1 */ +#endif +}; + +struct sq_hdr_subdesc { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 subdesc_type:4; + u64 tso:1; + u64 post_cqe:1; /* Post CQE on no error also */ + u64 dont_send:1; + u64 tstmp:1; + u64 subdesc_cnt:8; + u64 csum_l4:2; + u64 csum_l3:1; + u64 csum_inner_l4:2; + u64 csum_inner_l3:1; + u64 rsvd0:2; + u64 l4_offset:8; + u64 l3_offset:8; + u64 rsvd1:4; + u64 tot_len:20; /* W0 */ + + u64 rsvd2:24; + u64 inner_l4_offset:8; + u64 inner_l3_offset:8; + u64 tso_start:8; + u64 rsvd3:2; + u64 tso_max_paysize:14; /* W1 */ +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 tot_len:20; + u64 rsvd1:4; + u64 l3_offset:8; + u64 l4_offset:8; + u64 rsvd0:2; + u64 csum_inner_l3:1; + u64 csum_inner_l4:2; + u64 csum_l3:1; + u64 csum_l4:2; + u64 subdesc_cnt:8; + u64 tstmp:1; + u64 dont_send:1; + u64 post_cqe:1; /* Post CQE on no error also */ + u64 tso:1; + u64 subdesc_type:4; /* W0 */ + + u64 tso_max_paysize:14; + u64 rsvd3:2; + u64 tso_start:8; + u64 inner_l3_offset:8; + u64 inner_l4_offset:8; + u64 rsvd2:24; /* W1 */ +#endif +}; + +/* Queue config register formats */ +struct rq_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_2_63:62; + u64 ena:1; + u64 tcp_ena:1; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 tcp_ena:1; + u64 ena:1; + u64 reserved_2_63:62; +#endif +}; + +struct cq_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_43_63:21; + u64 ena:1; + u64 reset:1; + u64 caching:1; + u64 reserved_35_39:5; + u64 qsize:3; + u64 reserved_25_31:7; + u64 avg_con:9; + u64 reserved_0_15:16; +#elif 
defined(__LITTLE_ENDIAN_BITFIELD) + u64 reserved_0_15:16; + u64 avg_con:9; + u64 reserved_25_31:7; + u64 qsize:3; + u64 reserved_35_39:5; + u64 caching:1; + u64 reset:1; + u64 ena:1; + u64 reserved_43_63:21; +#endif +}; + +struct sq_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_32_63:32; + u64 cq_limit:8; + u64 reserved_20_23:4; + u64 ena:1; + u64 reserved_18_18:1; + u64 reset:1; + u64 ldwb:1; + u64 reserved_11_15:5; + u64 qsize:3; + u64 reserved_3_7:5; + u64 tstmp_bgx_intf:3; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 tstmp_bgx_intf:3; + u64 reserved_3_7:5; + u64 qsize:3; + u64 reserved_11_15:5; + u64 ldwb:1; + u64 reset:1; + u64 reserved_18_18:1; + u64 ena:1; + u64 reserved_20_23:4; + u64 cq_limit:8; + u64 reserved_32_63:32; +#endif +}; + +struct rbdr_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_45_63:19; + u64 ena:1; + u64 reset:1; + u64 ldwb:1; + u64 reserved_36_41:6; + u64 qsize:4; + u64 reserved_25_31:7; + u64 avg_con:9; + u64 reserved_12_15:4; + u64 lines:12; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 lines:12; + u64 reserved_12_15:4; + u64 avg_con:9; + u64 reserved_25_31:7; + u64 qsize:4; + u64 reserved_36_41:6; + u64 ldwb:1; + u64 reset:1; + u64 ena: 1; + u64 reserved_45_63:19; +#endif +}; + +struct qs_cfg { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 reserved_32_63:32; + u64 ena:1; + u64 reserved_27_30:4; + u64 sq_ins_ena:1; + u64 sq_ins_pos:6; + u64 lock_ena:1; + u64 lock_viol_cqe_ena:1; + u64 send_tstmp_ena:1; + u64 be:1; + u64 reserved_7_15:9; + u64 vnic:7; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + u64 vnic:7; + u64 reserved_7_15:9; + u64 be:1; + u64 send_tstmp_ena:1; + u64 lock_viol_cqe_ena:1; + u64 lock_ena:1; + u64 sq_ins_pos:6; + u64 sq_ins_ena:1; + u64 reserved_27_30:4; + u64 ena:1; + u64 reserved_32_63:32; +#endif +}; + +#endif /* Q_STRUCT_H */ diff --git a/drivers/net/cavium/thunder_bgx.c b/drivers/net/cavium/thunder_bgx.c new file mode 100644 index 0000000000..e296350158 --- /dev/null +++ 
b/drivers/net/cavium/thunder_bgx.c @@ -0,0 +1,1529 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#include <config.h> +#include <common.h> +#include <net.h> +#include <dm.h> +#include <pci.h> +#include <misc.h> +#include <netdev.h> +#include <malloc.h> +#include <miiphy.h> + +#include <asm/io.h> +#include <asm/arch-thunderx/thunderx.h> +#include <asm/arch-thunderx/thunderx_vnic.h> + +#include "nic_reg.h" +#include "nic.h" +#include "thunder_bgx.h" + +/* get the physical address of a CSR on a node */ +#define CSR_PA(node, csr) ((csr) | ((u64)(node) << 44)) + +/* Registers */ +#define GSERX_CFG(x) (0x000087E090000080ull + (x) * 0x1000000ull) +#define GSERX_SCRATCH(x) (0x000087E090000020ull + (x) * 0x1000000ull) +#define GSERX_PHY_CTL(x) (0x000087E090000000ull + (x) * 0x1000000ull) +#define GSERX_CFG_BGX BIT_ULL(2) +#define GSER_RX_EIE_DETSTS(x) (0x000087e090000150ull + (x) * 0x1000000ull) +#define GSER_CDRLOCK (8) +#define GSER_BR_RXX_CTL(x,y) (0x000087e090000400ull + (x) * 0x1000000ull + (y) * 0x80) +#define GSER_BR_RXX_CTL_RXT_SWM BIT_ULL(2) +#define GSER_BR_RXX_EER(x,y) (0x000087e090000418ull + (x) * 0x1000000ull + (y) * 0x80) +#define GSER_BR_RXX_EER_RXT_ESV BIT_ULL(14) +#define GSER_BR_RXX_EER_RXT_EER BIT_ULL(15) +#define EER_RXT_ESV (14) + +static const phy_interface_t if_mode[] = { + [QLM_MODE_SGMII] = PHY_INTERFACE_MODE_SGMII, + [QLM_MODE_RGMII] = PHY_INTERFACE_MODE_RGMII, + [QLM_MODE_QSGMII] = PHY_INTERFACE_MODE_QSGMII, + [QLM_MODE_XAUI] = PHY_INTERFACE_MODE_XAUI, + [QLM_MODE_RXAUI] = PHY_INTERFACE_MODE_RXAUI, +}; + +struct lmac { + struct bgx *bgx; + int dmac; + u8 mac[ETH_ALEN]; + u8 lmac_type; + u8 lane_to_sds; + bool use_training; + bool autoneg; + bool link_up; + int lmacid; /* ID within BGX */ + int phy_addr; /* ID on board */ + struct eth_device netdev; + struct mii_dev *mii_bus; + struct phy_device *phydev; + unsigned int last_duplex; + unsigned int last_link; + unsigned int last_speed; + u8 qlm_mode; + int qlm; 
+}; + +struct bgx { + u8 bgx_id; + int node; + struct lmac lmac[MAX_LMAC_PER_BGX]; + u8 lmac_count; + u8 max_lmac; + void __iomem *reg_base; + struct pci_dev *pdev; + bool is_rgx; +}; + +struct bgx_board_info bgx_board_info[CONFIG_MAX_BGX]; + +struct bgx *bgx_vnic[CONFIG_MAX_BGX]; +bool is_altpkg = 0; +extern int __cavm_if_phy_xs_init(struct mii_dev *bus, int phy_addr); + +/* The Cavium ThunderX network controller can *only* be found in SoCs + * containing the ThunderX ARM64 CPU implementation. All accesses to the device + * registers on this platform are implicitly strongly ordered with respect + * to memory accesses. So writeq_relaxed() and readq_relaxed() are safe to use + * with no memory barriers in this driver. The readq()/writeq() functions add + * explicit ordering operation which in this case are redundant, and only + * add overhead. + */ + +/* Register read/write APIs */ +static u64 bgx_reg_read(struct bgx *bgx, u8 lmac, u64 offset) +{ + u64 addr = (uintptr_t)bgx->reg_base + ((u32)lmac << 20) + offset; + + return readq((void *)addr); +} + +static void bgx_reg_write(struct bgx *bgx, u8 lmac, u64 offset, u64 val) +{ + u64 addr = (uintptr_t)bgx->reg_base + ((u32)lmac << 20) + offset; + + writeq(val, (void *)addr); +} + +static void bgx_reg_modify(struct bgx *bgx, u8 lmac, u64 offset, u64 val) +{ + u64 addr = (uintptr_t)bgx->reg_base + ((u32)lmac << 20) + offset; + + writeq(val | bgx_reg_read(bgx, lmac, offset), (void *)addr); +} + +static int bgx_poll_reg(struct bgx *bgx, u8 lmac, u64 reg, u64 mask, bool zero) +{ + int timeout = 100; + u64 reg_val; + + while (timeout) { + reg_val = bgx_reg_read(bgx, lmac, reg); + if (zero && !(reg_val & mask)) + return 0; + if (!zero && (reg_val & mask)) + return 0; + mdelay(1); + timeout--; + } + return 1; +} + +static int gser_poll_reg(u64 reg, int bit, u64 mask, u64 expected_val, int timeout) +{ + u64 reg_val; + debug("gser_poll_reg: reg = %#llx, mask = %#llx, expected_val = %#llx, bit = %d\n", + reg, mask, expected_val, 
bit); + while (timeout) { + reg_val = readq(CSR_PA(0, reg)) >> bit; + if ((reg_val & mask) == (expected_val)) + return 0; + mdelay(1); + timeout--; + } + return 1; +} + +static bool is_bgx_port_valid(int bgx, int lmac) +{ + debug("is_bgx_port_valid bgx %d lmac %d valid %d\n", + bgx, lmac, bgx_board_info[bgx].lmac_reg[lmac]); + + if (bgx_board_info[bgx].lmac_reg[lmac]) + return 1; + else + return 0; +} + +struct lmac *bgx_get_lmac(int node, int bgx_idx, int lmacid) +{ + struct bgx *bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + + if (bgx) + return &bgx->lmac[lmacid]; + + return NULL; +} + +const u8 *bgx_get_lmac_mac(int node, int bgx_idx, int lmacid) +{ + struct bgx *bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + + if (bgx) + return bgx->lmac[lmacid].mac; + + return NULL; +} + +void bgx_set_lmac_mac(int node, int bgx_idx, int lmacid, const u8 *mac) +{ + struct bgx *bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + + if (!bgx) + return; + + memcpy(bgx->lmac[lmacid].mac, mac, 6); +} + +/* Return number of BGX present in HW */ +void bgx_get_count(int node, int *bgx_count) +{ + int i; + struct bgx *bgx; + + *bgx_count = 0; + for (i = 0; i < CONFIG_MAX_BGX_PER_NODE; i++) { + bgx = bgx_vnic[node * CONFIG_MAX_BGX_PER_NODE + i]; + debug("bgx_vnic[%u]: %p\n", node * CONFIG_MAX_BGX_PER_NODE + i, bgx); + if (bgx) + *bgx_count |= (1 << i); + } +} + +/* Return number of LMAC configured for this BGX */ +int bgx_get_lmac_count(int node, int bgx_idx) +{ + struct bgx *bgx; + + bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + if (bgx) + return bgx->lmac_count; + + return 0; +} + +void bgx_lmac_rx_tx_enable(int node, int bgx_idx, int lmacid, bool enable) +{ + struct bgx *bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + u64 cfg; + + if (!bgx) + return; + + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + if (enable) + cfg |= CMR_PKT_RX_EN | CMR_PKT_TX_EN; + else + cfg &= ~(CMR_PKT_RX_EN | CMR_PKT_TX_EN); + 
bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); +} + +static void bgx_flush_dmac_addrs(struct bgx *bgx, u64 lmac) +{ + u64 dmac = 0x00; + u64 offset, addr; + + while (bgx->lmac[lmac].dmac > 0) { + offset = ((bgx->lmac[lmac].dmac - 1) * sizeof(dmac)) + + (lmac * MAX_DMAC_PER_LMAC * sizeof(dmac)); + addr = (uintptr_t)bgx->reg_base + + BGX_CMR_RX_DMACX_CAM + offset; + writeq(dmac, (void *)addr); + bgx->lmac[lmac].dmac--; + } +} + +/* Configure BGX LMAC in internal loopback mode */ +void bgx_lmac_internal_loopback(int node, int bgx_idx, + int lmac_idx, bool enable) +{ + struct bgx *bgx; + struct lmac *lmac; + u64 cfg; + + bgx = bgx_vnic[(node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx]; + if (!bgx) + return; + + lmac = &bgx->lmac[lmac_idx]; + if (lmac->qlm_mode == QLM_MODE_SGMII) { + cfg = bgx_reg_read(bgx, lmac_idx, BGX_GMP_PCS_MRX_CTL); + if (enable) + cfg |= PCS_MRX_CTL_LOOPBACK1; + else + cfg &= ~PCS_MRX_CTL_LOOPBACK1; + bgx_reg_write(bgx, lmac_idx, BGX_GMP_PCS_MRX_CTL, cfg); + } else { + cfg = bgx_reg_read(bgx, lmac_idx, BGX_SPUX_CONTROL1); + if (enable) + cfg |= SPU_CTL_LOOPBACK; + else + cfg &= ~SPU_CTL_LOOPBACK; + bgx_reg_write(bgx, lmac_idx, BGX_SPUX_CONTROL1, cfg); + } +} + +/* Return the DLM used for the BGX */ +static int get_qlm_for_bgx(int node, int bgx_id, int index) +{ + int qlm = 0; + u64 cfg; + + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX)) { + qlm = (bgx_id) ? 2 : 0; + qlm += (index >= 2) ? 
1 : 0; + } else if (CAVIUM_IS_MODEL(CAVIUM_CN83XX)) { + switch (bgx_id) { + case 0: + qlm = 2; + break; + case 1: + qlm = 3; + break; + case 2: + if (index >= 2) + qlm = 6; + else + qlm = 5; + break; + case 3: + qlm = 4; + break; + } + } + + cfg = readq(CSR_PA(node, GSERX_CFG(qlm))) & GSERX_CFG_BGX; + debug("get_qlm_for_bgx:qlm%d: cfg = %lld\n", qlm, cfg); + + /* Check if DLM is configured as BGX# */ + if (cfg) { + if (readq(CSR_PA(node, GSERX_PHY_CTL(qlm)))) + return -1; + return qlm; + } + return -1; +} + +static int bgx_lmac_sgmii_init(struct bgx *bgx, int lmacid) +{ + u64 cfg; + struct lmac *lmac; + + lmac = &bgx->lmac[lmacid]; + + debug("bgx_lmac_sgmii_init: bgx_id = %d, lmacid = %d\n", bgx->bgx_id, lmacid); + + bgx_reg_modify(bgx, lmacid, BGX_GMP_GMI_TXX_THRESH, 0x30); + /* max packet size */ + bgx_reg_modify(bgx, lmacid, BGX_GMP_GMI_RXX_JABBER, MAX_FRAME_SIZE); + + /* Disable frame alignment if using preamble */ + cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_GMI_TXX_APPEND); + if (cfg & 1) + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_SGMII_CTL, 0); + + /* Enable lmac */ + bgx_reg_modify(bgx, lmacid, BGX_CMRX_CFG, CMR_EN); + + /* PCS reset */ + bgx_reg_modify(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, PCS_MRX_CTL_RESET); + if (bgx_poll_reg(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, + PCS_MRX_CTL_RESET, true)) { + printf("BGX PCS reset not completed\n"); + return -1; + } + + /* power down, reset autoneg, autoneg enable */ + cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_PCS_MRX_CTL); + cfg &= ~PCS_MRX_CTL_PWR_DN; + + if (bgx_board_info[bgx->bgx_id].phy_info[lmacid].autoneg_dis) + cfg |= (PCS_MRX_CTL_RST_AN); + else + cfg |= (PCS_MRX_CTL_RST_AN | PCS_MRX_CTL_AN_EN); + bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, cfg); + + /* Disable disparity for QSGMII mode, to prevent propagation across + ports.
*/ + + if (lmac->qlm_mode == QLM_MODE_QSGMII) { + cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_PCS_MISCX_CTL); + cfg &= ~PCS_MISC_CTL_DISP_EN; + bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MISCX_CTL, cfg); + return 0; /* Skip checking AN_CPT */ + } + + if (lmac->qlm_mode == QLM_MODE_SGMII) { + if (bgx_poll_reg(bgx, lmacid, BGX_GMP_PCS_MRX_STATUS, + PCS_MRX_STATUS_AN_CPT, false)) { + printf("BGX AN_CPT not completed\n"); + return -1; + } + } + + return 0; +} + +static int bgx_lmac_sgmii_set_link_speed(struct lmac *lmac) +{ + u64 prtx_cfg; + u64 pcs_miscx_ctl; + u64 cfg; + struct bgx *bgx = lmac->bgx; + unsigned int lmacid = lmac->lmacid; + + debug("bgx_lmac_sgmii_set_link_speed(): lmacid %d\n", lmac->lmacid); + + /* Disable LMAC before setting up speed */ + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + cfg &= ~CMR_EN; + bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); + + /* Read GMX CFG */ + prtx_cfg = bgx_reg_read(bgx, lmacid, + BGX_GMP_GMI_PRTX_CFG); + /* Read PCS MISCS CTL */ + pcs_miscx_ctl = bgx_reg_read(bgx, lmacid, + BGX_GMP_PCS_MISCX_CTL); + + /* Use GMXENO to force the link down */ + if (lmac->link_up) { + pcs_miscx_ctl &= ~PCS_MISC_CTL_GMX_ENO; + /* change the duplex setting if the link is up */ + prtx_cfg |= GMI_PORT_CFG_DUPLEX; + } else { + pcs_miscx_ctl |= PCS_MISC_CTL_GMX_ENO; + } + + /* speed based setting for GMX */ + switch (lmac->last_speed) { + case 10: + prtx_cfg &= ~GMI_PORT_CFG_SPEED; + prtx_cfg |= GMI_PORT_CFG_SPEED_MSB; + prtx_cfg &= ~GMI_PORT_CFG_SLOT_TIME; + pcs_miscx_ctl |= 50; /* sampling point */ + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_SLOT, 0x40); + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_BURST, 0); + break; + case 100: + prtx_cfg &= ~GMI_PORT_CFG_SPEED; + prtx_cfg &= ~GMI_PORT_CFG_SPEED_MSB; + prtx_cfg &= ~GMI_PORT_CFG_SLOT_TIME; + pcs_miscx_ctl |= 0x5; /* sampling point */ + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_SLOT, 0x40); + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_BURST, 0); + break; + case 1000: + prtx_cfg |= GMI_PORT_CFG_SPEED; +
prtx_cfg &= ~GMI_PORT_CFG_SPEED_MSB; + prtx_cfg |= GMI_PORT_CFG_SLOT_TIME; + pcs_miscx_ctl |= 0x1; /* sampling point */ + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_SLOT, 0x200); + if (lmac->last_duplex) + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_BURST, 0); + else /* half duplex */ + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_BURST, 0x2000); + break; + default: + break; + } + + /* write back the new PCS misc and GMX settings */ + bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MISCX_CTL, pcs_miscx_ctl); + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_PRTX_CFG, prtx_cfg); + + /* read back GMX CFG again to check config completion */ + bgx_reg_read(bgx, lmacid, BGX_GMP_GMI_PRTX_CFG); + + /* enable BGX back */ + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + cfg |= CMR_EN; + bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); + + return 0; +} + +static int bgx_lmac_xaui_init(struct bgx *bgx, int lmacid, int lmac_type) +{ + u64 cfg; + struct lmac *lmac; + + lmac = &bgx->lmac[lmacid]; + + /* Reset SPU */ + bgx_reg_modify(bgx, lmacid, BGX_SPUX_CONTROL1, SPU_CTL_RESET); + if (bgx_poll_reg(bgx, lmacid, BGX_SPUX_CONTROL1, SPU_CTL_RESET, true)) { + printf("BGX SPU reset not completed\n"); + return -1; + } + + /* Disable LMAC */ + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + cfg &= ~CMR_EN; + bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); + + bgx_reg_modify(bgx, lmacid, BGX_SPUX_CONTROL1, SPU_CTL_LOW_POWER); + /* Set interleaved running disparity for RXAUI */ + if (lmac->qlm_mode != QLM_MODE_RXAUI) + bgx_reg_modify(bgx, lmacid, + BGX_SPUX_MISC_CONTROL, SPU_MISC_CTL_RX_DIS); + else + bgx_reg_modify(bgx, lmacid, BGX_SPUX_MISC_CONTROL, + SPU_MISC_CTL_RX_DIS | SPU_MISC_CTL_INTLV_RDISP); + + /* clear all interrupts */ + cfg = bgx_reg_read(bgx, lmacid, BGX_SMUX_RX_INT); + bgx_reg_write(bgx, lmacid, BGX_SMUX_RX_INT, cfg); + cfg = bgx_reg_read(bgx, lmacid, BGX_SMUX_TX_INT); + bgx_reg_write(bgx, lmacid, BGX_SMUX_TX_INT, cfg); + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_INT); + bgx_reg_write(bgx, 
lmacid, BGX_SPUX_INT, cfg); + + if (lmac->use_training) { + bgx_reg_write(bgx, lmacid, BGX_SPUX_BR_PMD_LP_CUP, 0x00); + bgx_reg_write(bgx, lmacid, BGX_SPUX_BR_PMD_LD_CUP, 0x00); + bgx_reg_write(bgx, lmacid, BGX_SPUX_BR_PMD_LD_REP, 0x00); + /* training enable */ + bgx_reg_modify(bgx, lmacid, + BGX_SPUX_BR_PMD_CRTL, SPU_PMD_CRTL_TRAIN_EN); + } + + /* Append FCS to each packet */ + bgx_reg_modify(bgx, lmacid, BGX_SMUX_TX_APPEND, SMU_TX_APPEND_FCS_D); + + /* Disable forward error correction */ + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_FEC_CONTROL); + cfg &= ~SPU_FEC_CTL_FEC_EN; + bgx_reg_write(bgx, lmacid, BGX_SPUX_FEC_CONTROL, cfg); + + /* Disable autoneg */ + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_AN_CONTROL); + cfg = cfg & ~(SPU_AN_CTL_XNP_EN); + if (lmac->use_training) + cfg = cfg | (SPU_AN_CTL_AN_EN); + else + cfg = cfg & ~(SPU_AN_CTL_AN_EN); + bgx_reg_write(bgx, lmacid, BGX_SPUX_AN_CONTROL, cfg); + + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_AN_ADV); + /* Clear all KR bits, configure according to the mode */ + cfg &= ~((0xfULL << 22) | (1ULL << 12)); + if (lmac->qlm_mode == QLM_MODE_10G_KR) + cfg |= (1 << 23); + else if (lmac->qlm_mode == QLM_MODE_40G_KR4) + cfg |= (1 << 24); + bgx_reg_write(bgx, lmacid, BGX_SPUX_AN_ADV, cfg); + + cfg = bgx_reg_read(bgx, 0, BGX_SPU_DBG_CONTROL); + if (lmac->use_training) + cfg |= SPU_DBG_CTL_AN_ARB_LINK_CHK_EN; + else + cfg &= ~SPU_DBG_CTL_AN_ARB_LINK_CHK_EN; + bgx_reg_write(bgx, 0, BGX_SPU_DBG_CONTROL, cfg); + + /* Enable lmac */ + bgx_reg_modify(bgx, lmacid, BGX_CMRX_CFG, CMR_EN); + + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_CONTROL1); + cfg &= ~SPU_CTL_LOW_POWER; + bgx_reg_write(bgx, lmacid, BGX_SPUX_CONTROL1, cfg); + + cfg = bgx_reg_read(bgx, lmacid, BGX_SMUX_TX_CTL); + cfg &= ~SMU_TX_CTL_UNI_EN; + cfg |= SMU_TX_CTL_DIC_EN; + bgx_reg_write(bgx, lmacid, BGX_SMUX_TX_CTL, cfg); + + /* take lmac_count into account */ + bgx_reg_modify(bgx, lmacid, BGX_SMUX_TX_THRESH, (0x100 - 1)); + /* max packet size */ + bgx_reg_modify(bgx, 
lmacid, BGX_SMUX_RX_JABBER, MAX_FRAME_SIZE); + + debug("xaui_init: lmacid = %d, qlm = %d, qlm_mode = %d\n", + lmacid, lmac->qlm, lmac->qlm_mode); + /* RXAUI with Marvell PHY requires some tweaking */ + if (lmac->qlm_mode == QLM_MODE_RXAUI) { + char mii_name[20]; + snprintf(mii_name, sizeof(mii_name), "smi%d", + bgx_board_info[bgx->bgx_id].phy_info[lmacid].mdio_bus); + + debug("mii_name: %s\n", mii_name); + lmac->mii_bus = miiphy_get_dev_by_name(mii_name); + lmac->phy_addr = bgx_board_info[bgx->bgx_id]. + phy_info[lmacid].phy_addr; + __cavm_if_phy_xs_init(lmac->mii_bus, lmac->phy_addr); + } + + return 0; +} + +/* Get max number of lanes present in a given QLM/DLM */ +static int get_qlm_lanes(int qlm) +{ + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX)) + return 2; + else if (CAVIUM_IS_MODEL(CAVIUM_CN83XX)) + return (qlm >= 5) ? 2 : 4; + else + return -1; +} + +int __rx_equalization(int qlm, int lane) +{ + int max_lanes = get_qlm_lanes(qlm); + int l; + int fail = 0; + + /* Before completing Rx equalization wait for GSERx_RX_EIE_DETSTS[CDRLOCK] to be set + This ensures the rx data is valid */ + if (lane == -1) { + if (gser_poll_reg(GSER_RX_EIE_DETSTS(qlm), GSER_CDRLOCK, 0xf, (1 << max_lanes) - 1, 100)) { + debug("ERROR: DLM%d: CDR Lock not detected for 2 lanes\n", qlm); + return -1; + } + } else { + if (gser_poll_reg(GSER_RX_EIE_DETSTS(qlm), GSER_CDRLOCK, (0xf & (1 << lane)), (1 << lane), 100)) { + debug("ERROR: DLM%d: CDR Lock not detected on %d lane\n", qlm, lane); + return -1; + } + } + + for (l = 0; l < max_lanes; l++) { + u64 rctl, reer; + + if ((lane != -1) && (lane != l)) + continue; + + /* Enable software control */ + rctl = readq(CSR_PA(0, GSER_BR_RXX_CTL(qlm, l))); + rctl |= GSER_BR_RXX_CTL_RXT_SWM; + writeq(rctl, CSR_PA(0, GSER_BR_RXX_CTL(qlm, l))); + + /* Clear the completion flag and initiate a new request */ + reer = readq(CSR_PA(0, GSER_BR_RXX_EER(qlm, l))); + reer &= ~GSER_BR_RXX_EER_RXT_ESV; + reer |= GSER_BR_RXX_EER_RXT_EER; + writeq(reer, CSR_PA(0, 
GSER_BR_RXX_EER(qlm, l))); + } + + /* Wait for RX equalization to complete */ + for (l = 0; l < max_lanes; l++) { + u64 rctl, reer; + + if ((lane != -1) && (lane != l)) + continue; + + gser_poll_reg(GSER_BR_RXX_EER(qlm, l), EER_RXT_ESV, 1, 1, 200); + reer = readq(CSR_PA(0, GSER_BR_RXX_EER(qlm, l))); + + /* Switch back to hardware control */ + rctl = readq(CSR_PA(0, GSER_BR_RXX_CTL(qlm, l))); + rctl &= ~GSER_BR_RXX_CTL_RXT_SWM; + writeq(rctl, CSR_PA(0, GSER_BR_RXX_CTL(qlm, l))); + + if (reer & GSER_BR_RXX_EER_RXT_ESV) { + debug("Rx equalization completed on DLM%d lane%d, rxt_esm = 0x%llx\n", + qlm, l, (reer & 0x3fff)); + } else { + debug("Rx equalization timed out on DLM%d lane%d\n", qlm, l); + fail = 1; + } + } + + return (fail) ? -1 : 0; +} + +static int bgx_xaui_check_link(struct lmac *lmac) +{ + struct bgx *bgx = lmac->bgx; + int lmacid = lmac->lmacid; + int lmac_type = lmac->lmac_type; + u64 cfg; + + bgx_reg_modify(bgx, lmacid, BGX_SPUX_MISC_CONTROL, SPU_MISC_CTL_RX_DIS); + + /* check if auto negotiation is complete */ + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_AN_CONTROL); + if (cfg & SPU_AN_CTL_AN_EN) { + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_AN_STATUS); + if (!(cfg & SPU_AN_STS_AN_COMPLETE)) { + /* Restart autonegotiation */ + debug("restarting auto-neg\n"); + bgx_reg_modify(bgx, lmacid, BGX_SPUX_AN_CONTROL, SPU_AN_CTL_AN_RESTART); + return -1; + } + } + + debug("%s link use_training %d\n", __func__, lmac->use_training); + if (lmac->use_training) { + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_INT); + if (!(cfg & (1ull << 13))) { + debug("waiting for link training\n"); + /* Clear the training interrupts (W1C) */ + cfg = (1ull << 13) | (1ull << 14); + bgx_reg_write(bgx, lmacid, BGX_SPUX_INT, cfg); + + udelay(2000); + /* Restart training */ + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_BR_PMD_CRTL); + cfg |= (1ull << 0); + bgx_reg_write(bgx, lmacid, BGX_SPUX_BR_PMD_CRTL, cfg); + return -1; + } + } + + /* Perform RX Equalization.
Applies to non-KR interfaces for speeds + >= 6.25Gbps. */ + if (!lmac->use_training) { + int qlm; + bool use_dlm = 0; + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX) + || (CAVIUM_IS_MODEL(CAVIUM_CN83XX) && (bgx->bgx_id == 2))) + use_dlm = 1; + switch (lmac->lmac_type) { + default: + case BGX_MODE_SGMII: + case BGX_MODE_RGMII: + case BGX_MODE_XAUI: + /* Nothing to do */ + break; + case BGX_MODE_XLAUI: + if (use_dlm) { + if (__rx_equalization(lmac->qlm, -1) || + __rx_equalization(lmac->qlm+1, -1)) { + printf("BGX%d:%d: Waiting for RX Equalization on DLM%d/DLM%d\n", + bgx->bgx_id, lmacid, lmac->qlm, lmac->qlm+1); + return -1; + } + } else { + if (__rx_equalization(lmac->qlm, -1)) { + printf("BGX%d:%d: Waiting for RX Equalization on QLM%d:\n", + bgx->bgx_id, lmacid, lmac->qlm); + return -1; + } + } + break; + case BGX_MODE_RXAUI: + /* RXAUI0 uses LMAC0:QLM0/QLM2 and RXAUI1 uses LMAC1:QLM1/QLM3 + RXAUI requires 2 lanes for each interface */ + qlm = lmac->qlm; + if (__rx_equalization(qlm, 0)) { + printf("BGX%d:%d: Waiting for RX Equalization on QLM%d, Lane0\n", + bgx->bgx_id, lmacid, qlm); + return -1; + } + if (__rx_equalization(qlm, 1)) { + printf("BGX%d:%d: Waiting for RX Equalization on QLM%d, Lane1\n", + bgx->bgx_id, lmacid, qlm); + return -1; + } + break; + case BGX_MODE_XFI: + { + int lid; + if ((bgx->bgx_id == 0) && is_altpkg && lmacid) + lid = 0; + else if ((lmacid >= 2) && use_dlm) + lid = lmacid - 2; + else + lid = lmacid; + + if (__rx_equalization(lmac->qlm, lid)) + printf("BGX%d:%d: Waiting for RX Equalization on QLM%d\n", + bgx->bgx_id, lid, lmac->qlm); + } + break; + } + } + + /* wait for PCS to come out of reset */ + if (bgx_poll_reg(bgx, lmacid, BGX_SPUX_CONTROL1, SPU_CTL_RESET, true)) { + printf("BGX SPU reset not completed\n"); + return -1; + } + + if ((lmac_type == 3) || (lmac_type == 4)) { + if (bgx_poll_reg(bgx, lmacid, BGX_SPUX_BR_STATUS1, + SPU_BR_STATUS_BLK_LOCK, false)) { + printf("SPU_BR_STATUS_BLK_LOCK not completed\n"); + return -1; + } + } else { + if 
(bgx_poll_reg(bgx, lmacid, BGX_SPUX_BX_STATUS, + SPU_BX_STATUS_RX_ALIGN, false)) { + printf("SPU_BX_STATUS_RX_ALIGN not completed\n"); + return -1; + } + } + + /* Clear rcvflt bit (latching high) and read it back */ + bgx_reg_modify(bgx, lmacid, BGX_SPUX_STATUS2, SPU_STATUS2_RCVFLT); + if (bgx_reg_read(bgx, lmacid, BGX_SPUX_STATUS2) & SPU_STATUS2_RCVFLT) { + printf("Receive fault, retry training\n"); + if (lmac->use_training) { + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_INT); + if (!(cfg & (1ull << 13))) { + cfg = (1ull << 13) | (1ull << 14); + bgx_reg_write(bgx, lmacid, BGX_SPUX_INT, cfg); + cfg = bgx_reg_read(bgx, lmacid, + BGX_SPUX_BR_PMD_CRTL); + cfg |= (1ull << 0); + bgx_reg_write(bgx, lmacid, + BGX_SPUX_BR_PMD_CRTL, cfg); + return -1; + } + } + return -1; + } + + /* Wait for MAC RX to be ready */ + if (bgx_poll_reg(bgx, lmacid, BGX_SMUX_RX_CTL, + SMU_RX_CTL_STATUS, true)) { + printf("SMU RX link not okay\n"); + return -1; + } + + /* Wait for BGX RX to be idle */ + if (bgx_poll_reg(bgx, lmacid, BGX_SMUX_CTL, SMU_CTL_RX_IDLE, false)) { + printf("SMU RX not idle\n"); + return -1; + } + + /* Wait for BGX TX to be idle */ + if (bgx_poll_reg(bgx, lmacid, BGX_SMUX_CTL, SMU_CTL_TX_IDLE, false)) { + printf("SMU TX not idle\n"); + return -1; + } + + if (bgx_reg_read(bgx, lmacid, BGX_SPUX_STATUS2) & SPU_STATUS2_RCVFLT) { + printf("Receive fault\n"); + return -1; + } + + /* Receive link is latching low.
Force it high and verify it */ + if (!(bgx_reg_read(bgx, lmacid, BGX_SPUX_STATUS1) & SPU_STATUS1_RCV_LNK)) + bgx_reg_modify(bgx, lmacid, BGX_SPUX_STATUS1, SPU_STATUS1_RCV_LNK); + if (bgx_poll_reg(bgx, lmacid, BGX_SPUX_STATUS1, + SPU_STATUS1_RCV_LNK, false)) { + printf("SPU receive link down\n"); + return -1; + } + + cfg = bgx_reg_read(bgx, lmacid, BGX_SPUX_MISC_CONTROL); + cfg &= ~SPU_MISC_CTL_RX_DIS; + bgx_reg_write(bgx, lmacid, BGX_SPUX_MISC_CONTROL, cfg); + return 0; +} + +int bgx_poll_for_link(int node, int bgx_idx, int lmacid) +{ + int ret; + struct lmac *lmac = bgx_get_lmac(node, bgx_idx, lmacid); + char mii_name[10]; + + if (lmac == NULL) { + printf("LMAC %d/%d/%d is disabled or doesn't exist\n", + node, bgx_idx, lmacid); + return 0; + } + + debug("%s: %d, lmac: %d/%d/%d %p\n", + __FILE__, __LINE__, + node, bgx_idx, lmacid, lmac); + + if ((lmac->qlm_mode == QLM_MODE_SGMII) || + (lmac->qlm_mode == QLM_MODE_RGMII) || + (lmac->qlm_mode == QLM_MODE_QSGMII)) { + + if (bgx_board_info[bgx_idx].phy_info[lmacid].phy_addr == -1) { + lmac->link_up = 1; + lmac->last_speed = 1000; + lmac->last_duplex = 1; + printf("BGX%d:LMAC %u link up\n", bgx_idx, lmacid); + return lmac->link_up; + } + snprintf(mii_name, sizeof(mii_name), "smi%d", + bgx_board_info[bgx_idx].phy_info[lmacid].mdio_bus); + + debug("mii_name: %s\n", mii_name); + + lmac->mii_bus = miiphy_get_dev_by_name(mii_name); + lmac->phy_addr = bgx_board_info[bgx_idx]. 
+ phy_info[lmacid].phy_addr; + + debug("lmac->mii_bus: %p\n", lmac->mii_bus); + if (!lmac->mii_bus) { + printf("MDIO device %s not found\n", mii_name); + ret = -ENODEV; + return ret; + } + + lmac->phydev = phy_connect(lmac->mii_bus, lmac->phy_addr, + &lmac->netdev, + if_mode[lmac->qlm_mode]); + + if (!lmac->phydev) { + printf("%s: No PHY device\n", + lmac->netdev.name); + return -1; + } + + ret = phy_config(lmac->phydev); + if (ret) { + printf("%s: Could not initialize PHY %s\n", + lmac->netdev.name, lmac->phydev->dev->name); + return ret; + } + + ret = phy_startup(lmac->phydev); + debug("%s: %d\n", __FILE__, __LINE__); + if (ret) { + printf("%s: Could not initialize PHY %s\n", + lmac->netdev.name, lmac->phydev->dev->name); + } + +#ifdef CONFIG_THUNDERX_XCV + if (lmac->qlm_mode == QLM_MODE_RGMII) + xcv_setup_link(lmac->phydev->link, lmac->phydev->speed); +#endif + + lmac->link_up = lmac->phydev->link; + lmac->last_speed = lmac->phydev->speed; + lmac->last_duplex = lmac->phydev->duplex; + + debug("bgx_poll_for_link(), qlm_mode %d phy link status 0x%x, " + "last speed 0x%x, duplex 0x%x\n", + lmac->qlm_mode, lmac->link_up, lmac->last_speed, lmac->last_duplex); + + if (lmac->qlm_mode != QLM_MODE_RGMII) + bgx_lmac_sgmii_set_link_speed(lmac); + + } else { + u64 status1; + u64 tx_ctl; + u64 rx_ctl; + status1 = bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SPUX_STATUS1); + tx_ctl = bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SMUX_TX_CTL); + rx_ctl = bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SMUX_RX_CTL); + + debug("BGX%d LMAC%d BGX_SPUX_STATUS2: %lx\n", + bgx_idx, lmacid, + (unsigned long)bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SPUX_STATUS2)); + debug("BGX%d LMAC%d BGX_SPUX_STATUS1: %lx\n", + bgx_idx, lmacid, + (unsigned long)bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SPUX_STATUS1)); + debug("BGX%d LMAC%d BGX_SMUX_RX_CTL: %lx\n", + bgx_idx, lmacid, + (unsigned long)bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SMUX_RX_CTL)); + debug("BGX%d LMAC%d BGX_SMUX_TX_CTL: %lx\n", + bgx_idx,
lmacid, + (unsigned long)bgx_reg_read(lmac->bgx, lmac->lmacid, BGX_SMUX_TX_CTL)); + + if ((status1 & SPU_STATUS1_RCV_LNK) && + ((tx_ctl & SMU_TX_CTL_LNK_STATUS) == 0) && + ((rx_ctl & SMU_RX_CTL_STATUS) == 0)) { + lmac->link_up = 1; + if (lmac->lmac_type == 4) + lmac->last_speed = 40000; + else + lmac->last_speed = 10000; + lmac->last_duplex = 1; + } else { + lmac->link_up = 0; + lmac->last_speed = 0; + lmac->last_duplex = 0; + return bgx_xaui_check_link(lmac); + } + + lmac->last_link = lmac->link_up; + } + + printf("BGX%d:LMAC %u link %s\n", bgx_idx, lmacid, (lmac->link_up) ? "up" : "down"); + + return lmac->link_up; +} + + +static int bgx_lmac_enable(struct bgx *bgx, int8_t lmacid) +{ + struct lmac *lmac; + u64 cfg; + + lmac = &bgx->lmac[lmacid]; + lmac->bgx = bgx; + + debug("bgx_lmac_enable: lmac: %p, lmacid = %d\n", lmac, lmacid); + + if ((lmac->qlm_mode == QLM_MODE_SGMII) || + (lmac->qlm_mode == QLM_MODE_RGMII) || + (lmac->qlm_mode == QLM_MODE_QSGMII)) { + if (bgx_lmac_sgmii_init(bgx, lmacid)) { + debug("bgx_lmac_sgmii_init failed\n"); + return -1; + } + cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_GMI_TXX_APPEND); + cfg |= ((1ull << 2) | (1ull << 1)); /* FCS and PAD */ + bgx_reg_modify(bgx, lmacid, BGX_GMP_GMI_TXX_APPEND, cfg); + bgx_reg_write(bgx, lmacid, BGX_GMP_GMI_TXX_MIN_PKT, 60 - 1); + } else { + if (bgx_lmac_xaui_init(bgx, lmacid, lmac->lmac_type)) + return -1; + cfg = bgx_reg_read(bgx, lmacid, BGX_SMUX_TX_APPEND); + cfg |= ((1ull << 2) | (1ull << 1)); /* FCS and PAD */ + bgx_reg_modify(bgx, lmacid, BGX_SMUX_TX_APPEND, cfg); + bgx_reg_write(bgx, lmacid, BGX_SMUX_TX_MIN_PKT, 60 + 4); + } + + /* Enable lmac */ + bgx_reg_modify(bgx, lmacid, BGX_CMRX_CFG, + CMR_EN | CMR_PKT_RX_EN | CMR_PKT_TX_EN); + + return 0; +} + +void bgx_lmac_disable(struct bgx *bgx, uint8_t lmacid) +{ + struct lmac *lmac; + u64 cmrx_cfg; + + lmac = &bgx->lmac[lmacid]; + + cmrx_cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + cmrx_cfg &= ~(1 << 15); + bgx_reg_write(bgx, lmacid, 
BGX_CMRX_CFG, cmrx_cfg); + bgx_flush_dmac_addrs(bgx, lmacid); + + if (lmac->phydev) + phy_shutdown(lmac->phydev); + + lmac->phydev = NULL; +} + +/* Program BGXX_CMRX_CONFIG.{lmac_type,lane_to_sds} for each interface, + * and the number of LMACs used by this interface. Each LMAC can be + * programmed in a different mode, so parse each LMAC one at a time. */ +static void bgx_init_hw(struct bgx *bgx) +{ + struct lmac *lmac; + int i, lmacid, count = 0, inc = 0; + char buf[40]; + static int qsgmii_configured = 0; + + for (lmacid = 0; lmacid < MAX_LMAC_PER_BGX; lmacid++) { + struct lmac *tlmac; + + lmac = &bgx->lmac[lmacid]; + /* If QLM is not programmed, skip */ + if (lmac->qlm == -1) + continue; + + switch (lmac->qlm_mode) { + case QLM_MODE_SGMII: + { + /* EBB8000 (alternative pkg) has only lane0 present on + DLM0 and DLM1, skip configuring other lanes */ + if ((bgx->bgx_id == 0) && is_altpkg) { + if (lmacid % 2) + continue; + } + lmac->lane_to_sds = lmacid; + lmac->lmac_type = 0; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: SGMII\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + } + case QLM_MODE_XAUI: + if (lmacid != 0) + continue; + lmac->lmac_type = 1; + lmac->lane_to_sds = 0xE4; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: XAUI\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_RXAUI: + if (lmacid == 0) { + lmac->lmac_type = 2; + lmac->lane_to_sds = 0x4; + } else if (lmacid == 1) { + struct lmac *tlmac; + tlmac = &bgx->lmac[2]; + if (tlmac->qlm_mode == QLM_MODE_RXAUI) { + lmac->lmac_type = 2; + lmac->lane_to_sds = 0xe; + lmac->qlm = tlmac->qlm; + } + } else + continue; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: RXAUI\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_XFI: + /* EBB8000 (alternative pkg) has only lane0 present on + DLM0 and DLM1, skip configuring other lanes */ + if ((bgx->bgx_id == 0) && is_altpkg) { + if (lmacid % 2) + continue; + } + lmac->lane_to_sds = lmacid; + lmac->lmac_type = 
3; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: XFI\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_XLAUI: + if (lmacid != 0) + continue; + lmac->lmac_type = 4; + lmac->lane_to_sds = 0xE4; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: XLAUI\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_10G_KR: + /* EBB8000 (alternative pkg) has only lane0 present on + DLM0 and DLM1, skip configuring other lanes */ + if ((bgx->bgx_id == 0) && is_altpkg) { + if (lmacid % 2) + continue; + } + lmac->lane_to_sds = lmacid; + lmac->lmac_type = 3; + lmac->use_training = 1; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: 10G-KR\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_40G_KR4: + if (lmacid != 0) + continue; + lmac->lmac_type = 4; + lmac->lane_to_sds = 0xE4; + lmac->use_training = 1; + snprintf(buf, sizeof(buf), + "BGX%d QLM%d LMAC%d mode: 40G-KR4\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case QLM_MODE_RGMII: + if (lmacid != 0) + continue; + lmac->lmac_type = 5; + lmac->lane_to_sds = 0xE4; + snprintf(buf, sizeof(buf), + "BGX%d LMAC%d mode: RGMII\n", + bgx->bgx_id, lmacid); + break; + case QLM_MODE_QSGMII: + if (qsgmii_configured) + continue; + if ((lmacid == 0) || (lmacid == 2)) { + count = 4; + printf("BGX%d QLM%d LMAC%d mode: QSGMII\n", + bgx->bgx_id, lmac->qlm, lmacid); + for (i = 0; i < count; i++) { + struct lmac *l; + l = &bgx->lmac[i]; + l->lmac_type = 6; + l->qlm_mode = QLM_MODE_QSGMII; + l->lane_to_sds = lmacid + i; + if (is_bgx_port_valid(bgx->bgx_id, i)) + bgx_reg_write(bgx, i, BGX_CMRX_CFG, + (l->lmac_type << 8) | + l->lane_to_sds); + } + qsgmii_configured = 1; + } + continue; + default: + continue; + } + + + /* Reset lmac to the unused slot */ + if (is_bgx_port_valid(bgx->bgx_id, count) && + (lmac->qlm_mode != QLM_MODE_QSGMII)) { + int lmac_enable = 0; + tlmac = &bgx->lmac[count]; + tlmac->lmac_type = lmac->lmac_type; + /* Adjust lane_to_sds based on BGX-ENABLE */ + for (; (inc + 
count) < MAX_LMAC_PER_BGX; inc++) { + lmac_enable = bgx_board_info[bgx->bgx_id].lmac_enable[count + inc]; + if (lmac_enable) + break; + } + + if ((inc != 0) && (inc < MAX_LMAC_PER_BGX) && lmac_enable && (inc != count)) + tlmac->lane_to_sds = lmac->lane_to_sds + abs(inc - count); + else + tlmac->lane_to_sds = lmac->lane_to_sds; + tlmac->qlm = lmac->qlm; + tlmac->qlm_mode = lmac->qlm_mode; + + printf("%s", buf); + /* Initialize lmac_type and lane_to_sds */ + bgx_reg_write(bgx, count, BGX_CMRX_CFG, + (tlmac->lmac_type << 8) | + tlmac->lane_to_sds); + count += 1; + } + } + + printf("BGX%d LMACs: %d\n", bgx->bgx_id, count); + bgx->lmac_count = count; + bgx_reg_write(bgx, 0, BGX_CMR_RX_LMACS, count); + bgx_reg_write(bgx, 0, BGX_CMR_TX_LMACS, count); + + bgx_reg_modify(bgx, 0, BGX_CMR_GLOBAL_CFG, CMR_GLOBAL_CFG_FCS_STRIP); + if (bgx_reg_read(bgx, 0, BGX_CMR_BIST_STATUS)) + printf("BGX%d BIST failed\n", bgx->bgx_id); + + /* Set the backpressure AND mask */ + for (i = 0; i < bgx->lmac_count; i++) + bgx_reg_modify(bgx, 0, BGX_CMR_CHAN_MSK_AND, + ((1ULL << MAX_BGX_CHANS_PER_LMAC) - 1) << + (i * MAX_BGX_CHANS_PER_LMAC)); + + /* Disable all MAC filtering */ + for (i = 0; i < RX_DMAC_COUNT; i++) + bgx_reg_write(bgx, 0, BGX_CMR_RX_DMACX_CAM + (i * 8), 0x00); + + /* Disable MAC steering (NCSI traffic) */ + for (i = 0; i < RX_TRAFFIC_STEER_RULE_COUNT; i++) + bgx_reg_write(bgx, 0, BGX_CMR_RX_STREERING + (i * 8), 0x00); +} + +static void bgx_get_qlm_mode(struct bgx *bgx) +{ + struct lmac *lmac; + int lmacid; + + /* Read LMACx type to figure out QLM mode + * This is configured by low level firmware + */ + for (lmacid = 0; lmacid < MAX_LMAC_PER_BGX; lmacid++) { + int lmac_type; + int train_en; + int index = 0; + + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX) + || (CAVIUM_IS_MODEL(CAVIUM_CN83XX) && (bgx->bgx_id == 2))) + index = (lmacid < 2) ? 
0 : 2; + + lmac = &bgx->lmac[lmacid]; + + /* check if QLM is programmed, if not, skip */ + if (lmac->qlm == -1) + continue; + + lmac_type = bgx_reg_read(bgx, index, BGX_CMRX_CFG); + lmac->lmac_type = (lmac_type >> 8) & 0x07; + debug("bgx_get_qlm_mode:%d:%d: lmac_type = %d, altpkg = %d\n", bgx->bgx_id, + lmacid, lmac->lmac_type, is_altpkg); + + train_en = (readq(CSR_PA(0, GSERX_SCRATCH(lmac->qlm))) & 0xf); + + switch(lmac->lmac_type) { + case BGX_MODE_SGMII: + if (bgx->is_rgx) { + if (lmacid == 0) { + lmac->qlm_mode = QLM_MODE_RGMII; + debug("BGX%d LMAC%d mode: RGMII\n", + bgx->bgx_id, lmacid); + } + continue; + } else { + if ((bgx->bgx_id == 0) && is_altpkg) { + if (lmacid % 2) + continue; + } + lmac->qlm_mode = QLM_MODE_SGMII; + debug("BGX%d QLM%d LMAC%d mode: SGMII\n", + bgx->bgx_id, lmac->qlm, lmacid); + } + break; + case BGX_MODE_XAUI: + if ((bgx->bgx_id == 0) && is_altpkg) + continue; + lmac->qlm_mode = QLM_MODE_XAUI; + if (lmacid != 0) + continue; + debug("BGX%d QLM%d LMAC%d mode: XAUI\n", + bgx->bgx_id, lmac->qlm, lmacid); + break; + case BGX_MODE_RXAUI: + if ((bgx->bgx_id == 0) && is_altpkg) + continue; + lmac->qlm_mode = QLM_MODE_RXAUI; + if (index == lmacid) { + debug("BGX%d QLM%d LMAC%d mode: RXAUI\n", + bgx->bgx_id, lmac->qlm, (index ? 
1 : 0)); + } + break; + case BGX_MODE_XFI: + if ((bgx->bgx_id == 0) && is_altpkg) { + if (lmacid % 2) + continue; + } + if (((lmacid < 2) && (train_en & (1 << lmacid))) + || (train_en & (1 << (lmacid - 2)))) { + lmac->qlm_mode = QLM_MODE_10G_KR; + debug("BGX%d QLM%d LMAC%d mode: 10G_KR\n", + bgx->bgx_id, lmac->qlm, lmacid); + } else { + lmac->qlm_mode = QLM_MODE_XFI; + debug("BGX%d QLM%d LMAC%d mode: XFI\n", + bgx->bgx_id, lmac->qlm, lmacid); + } + break; + case BGX_MODE_XLAUI: + if ((bgx->bgx_id == 0) && is_altpkg) + continue; + if (train_en) { + lmac->qlm_mode = QLM_MODE_40G_KR4; + if (lmacid != 0) + break; + debug("BGX%d QLM%d LMAC%d mode: 40G_KR4\n", + bgx->bgx_id, lmac->qlm, lmacid); + } else { + lmac->qlm_mode = QLM_MODE_XLAUI; + if (lmacid != 0) + break; + debug("BGX%d QLM%d LMAC%d mode: XLAUI\n", + bgx->bgx_id, lmac->qlm, lmacid); + } + break; + case BGX_MODE_QSGMII: + /* If QLM is configured as QSGMII, use lmac0 */ + if (CAVIUM_IS_MODEL(CAVIUM_CN83XX) + && (lmacid == 2) + && (bgx->bgx_id != 3)) { + //lmac->qlm_mode = QLM_MODE_DISABLED; + continue; + } + + if ((lmacid == 0) || (lmacid == 2)) { + lmac->qlm_mode = QLM_MODE_QSGMII; + debug("BGX%d QLM%d LMAC%d mode: QSGMII\n", + bgx->bgx_id, lmac->qlm, lmacid); + } + break; + default: + break; + } + } +} + +void bgx_set_board_info(int bgx_id, int *mdio_bus, + int *phy_addr, bool *autoneg_dis, bool *lmac_reg, + bool *lmac_enable) +{ + unsigned int i; + + for (i = 0; i < MAX_LMAC_PER_BGX; i++) { + bgx_board_info[bgx_id].phy_info[i].phy_addr = phy_addr[i]; + bgx_board_info[bgx_id].phy_info[i].mdio_bus = mdio_bus[i]; + bgx_board_info[bgx_id].phy_info[i].autoneg_dis = autoneg_dis[i]; + bgx_board_info[bgx_id].lmac_reg[i] = lmac_reg[i]; + bgx_board_info[bgx_id].lmac_enable[i] = lmac_enable[i]; + debug("bgx_set_board_info bgx_id %d lmac %d phy_addr 0x%x mdio bus %d\n" + "autoneg_dis %d lmac_reg %d, lmac_enable = %d\n", bgx_id, i, + bgx_board_info[bgx_id].phy_info[i].phy_addr, + 
bgx_board_info[bgx_id].phy_info[i].mdio_bus, + bgx_board_info[bgx_id].phy_info[i].autoneg_dis, + bgx_board_info[bgx_id].lmac_reg[i], + bgx_board_info[bgx_id].lmac_enable[i]); + } +} + + +int thunderx_bgx_remove(struct udevice *dev) +{ + int lmacid; + u64 cfg; + int count = MAX_LMAC_PER_BGX; + struct bgx *bgx = dev_get_priv(dev); + + if (bgx->reg_base == NULL) + return 0; + + if (bgx->is_rgx) + count = 1; + + for (lmacid = 0; lmacid < count; lmacid++) { + struct lmac *lmac; + lmac = &bgx->lmac[lmacid]; + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); + cfg &= ~(CMR_PKT_RX_EN | CMR_PKT_TX_EN); + bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); + + /* Disable PCS for 1G interface */ + if ((lmac->lmac_type == BGX_MODE_SGMII) + || (lmac->lmac_type == BGX_MODE_QSGMII)) { + cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_PCS_MRX_CTL); + cfg |= PCS_MRX_CTL_PWR_DN; + bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, cfg); + } + + debug("%s disabling bgx%d lmacid%d\n", + __func__, bgx->bgx_id, lmacid); + bgx_lmac_disable(bgx, lmacid); + } + return 0; +} + +/* Returns 0 for standard pkg, non-zero for alternative pkg; must be int, + * not bool, since the CN81XX case returns 2 */ +int alternate_pkg(void) +{ + u64 val = readq(CAVM_MIO_FUS_DAT2); + int altpkg; + + altpkg = (val >> 22) & 0x3; + + /* Figure out alt pkg by reading chip_id or lmc_mode32 on 81xx */ + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX) && (altpkg || ((val >> 30) & 0x1))) + return 2; + return altpkg; +} + +int thunderx_bgx_probe(struct udevice *dev) +{ + int err; + struct bgx *bgx = dev_get_priv(dev); + uint8_t lmac = 0; + int qlm[4] = {-1, -1, -1, -1}; + int bgx_idx, node; + size_t size; + int inc = 1; + + bgx->reg_base = dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM); + if (!bgx->reg_base) { + printf("%s: No PCI region found\n", dev->name); + return -EINVAL; + } + is_altpkg = alternate_pkg(); + +#ifdef CONFIG_THUNDERX_XCV + /* Use FAKE BGX2 for RGX interface */ + if ((((uintptr_t)bgx->reg_base >> 24) & 0xf) == 0x8) { + int np; + const char *phy_mode; + int phy_interface = -1; + + bgx->bgx_id = 2; + bgx->is_rgx = true; + for (lmac 
= 0; lmac < MAX_LMAC_PER_BGX; lmac++) { + if (lmac == 0) { + bgx->lmac[lmac].lmacid = 0; + bgx->lmac[lmac].qlm = 0; + } else { + bgx->lmac[lmac].qlm = -1; + } + } + np = fdt_first_subnode(gd->fdt_blob, dev_of_offset(dev)); + np = fdtdec_lookup_phandle(gd->fdt_blob, np, "phy-handle"); + phy_mode = fdt_getprop(gd->fdt_blob, np, "phy-mode", NULL); + phy_interface = phy_get_interface_by_name(phy_mode); + if (phy_interface == -1) + phy_interface = PHY_INTERFACE_MODE_RGMII; + xcv_init_hw(phy_interface); + goto skip_qlm_config; + } +#endif + + node = node_id(bgx->reg_base); + bgx_idx = ((uintptr_t)bgx->reg_base >> 24) & 3; + bgx->bgx_id = (node * CONFIG_MAX_BGX_PER_NODE) + bgx_idx; + + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX)) + inc = 2; + else if (CAVIUM_IS_MODEL(CAVIUM_CN83XX) && (bgx_idx == 2)) + inc = 2; + + for (lmac = 0; lmac < MAX_LMAC_PER_BGX; lmac += inc) { + /* BGX3 (DLM4), has only 2 lanes */ + if (CAVIUM_IS_MODEL(CAVIUM_CN83XX) && (bgx_idx == 3) && lmac >= 2) + continue; + qlm[lmac + 0] = get_qlm_for_bgx(node, bgx_idx, lmac); + /* Each DLM has 2 lanes, configure both lanes with + same qlm configuration */ + if (inc == 2) + qlm[lmac + 1] = qlm[lmac]; + debug("qlm[%d] = %d\n", lmac, qlm[lmac]); + } + + /* A BGX can take 1 or 2 DLMs, if both the DLMs are not configured + as BGX, then return, nothing to initialize */ + if (CAVIUM_IS_MODEL(CAVIUM_CN81XX)) + if ((qlm[0] == -1) && (qlm[2] == -1)) + return -ENODEV; + + /* MAP configuration registers */ + for (lmac = 0; lmac < MAX_LMAC_PER_BGX; lmac++) { + bgx->lmac[lmac].qlm = qlm[lmac]; + bgx->lmac[lmac].lmacid = lmac; + } + +#ifdef CONFIG_THUNDERX_XCV +skip_qlm_config: +#endif + bgx_vnic[bgx->bgx_id] = bgx; + bgx_get_qlm_mode(bgx); + debug("bgx_vnic[%u]: %p\n", bgx->bgx_id, bgx); + + bgx_init_hw(bgx); + + /* Enable all LMACs */ + for (lmac = 0; lmac < bgx->lmac_count; lmac++) { + snprintf(bgx->lmac[lmac].netdev.name, + sizeof(bgx->lmac[lmac].netdev.name), + "lmac%d", lmac); + + err = bgx_lmac_enable(bgx, lmac); + if (err) 
{ + printf("BGX%d failed to enable lmac%d\n", + bgx->bgx_id, lmac); + } + } + + return 0; +} + +static const struct misc_ops thunderx_bgx_ops = { +}; + +static const struct udevice_id thunderx_bgx_ids[] = { + { .compatible = "cavium,thunder-8890-bgx" }, + {} +}; + +U_BOOT_DRIVER(thunderx_bgx) = { + .name = "thunderx_bgx", + .id = UCLASS_MISC, + .probe = thunderx_bgx_probe, + .remove = thunderx_bgx_remove, + .of_match = thunderx_bgx_ids, + .ops = &thunderx_bgx_ops, + .priv_auto_alloc_size = sizeof(struct bgx), + .flags = DM_FLAG_OS_PREPARE, +}; + +static struct pci_device_id thunderx_pci_bgx_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_BGX) }, + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_RGX) }, + { }, +}; + +U_BOOT_PCI_DEVICE(thunderx_bgx, thunderx_pci_bgx_supported); + diff --git a/drivers/net/cavium/thunder_bgx.h b/drivers/net/cavium/thunder_bgx.h new file mode 100644 index 0000000000..ce1bb13f6d --- /dev/null +++ b/drivers/net/cavium/thunder_bgx.h @@ -0,0 +1,259 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#ifndef THUNDER_BGX_H +#define THUNDER_BGX_H + +/* PCI device ID */ +#define PCI_DEVICE_ID_THUNDER_BGX 0xA026 +#define PCI_DEVICE_ID_THUNDER_RGX 0xA054 + +#define MAX_LMAC_PER_BGX 4 +#define MAX_BGX_CHANS_PER_LMAC 16 +#define MAX_DMAC_PER_LMAC 8 +#define MAX_FRAME_SIZE 9216 + +#define MAX_DMAC_PER_LMAC_TNS_BYPASS_MODE 2 + +#define MAX_LMAC (CONFIG_MAX_BGX_PER_NODE * MAX_LMAC_PER_BGX) + +/* Registers */ +#define BGX_CMRX_CFG 0x00 +#define CMR_PKT_TX_EN BIT_ULL(13) +#define CMR_PKT_RX_EN BIT_ULL(14) +#define CMR_EN BIT_ULL(15) +#define BGX_CMR_GLOBAL_CFG 0x08 +#define CMR_GLOBAL_CFG_FCS_STRIP BIT_ULL(6) +#define BGX_CMRX_RX_ID_MAP 0x60 +#define BGX_CMRX_RX_STAT0 0x70 +#define BGX_CMRX_RX_STAT1 0x78 +#define BGX_CMRX_RX_STAT2 0x80 +#define BGX_CMRX_RX_STAT3 0x88 +#define BGX_CMRX_RX_STAT4 0x90 +#define BGX_CMRX_RX_STAT5 0x98 +#define BGX_CMRX_RX_STAT6 0xA0 +#define BGX_CMRX_RX_STAT7 0xA8 +#define 
BGX_CMRX_RX_STAT8 0xB0 +#define BGX_CMRX_RX_STAT9 0xB8 +#define BGX_CMRX_RX_STAT10 0xC0 +#define BGX_CMRX_RX_BP_DROP 0xC8 +#define BGX_CMRX_RX_DMAC_CTL 0x0E8 +#define BGX_CMRX_RX_FIFO_LEN 0x108 +#define BGX_CMR_RX_DMACX_CAM 0x200 +#define RX_DMACX_CAM_EN BIT_ULL(48) +#define RX_DMACX_CAM_LMACID(x) (x << 49) +#define RX_DMAC_COUNT 32 +#define BGX_CMR_RX_STREERING 0x300 +#define RX_TRAFFIC_STEER_RULE_COUNT 8 +#define BGX_CMR_CHAN_MSK_AND 0x450 +#define BGX_CMR_BIST_STATUS 0x460 +#define BGX_CMR_RX_LMACS 0x468 +#define BGX_CMRX_TX_FIFO_LEN 0x518 +#define BGX_CMRX_TX_STAT0 0x600 +#define BGX_CMRX_TX_STAT1 0x608 +#define BGX_CMRX_TX_STAT2 0x610 +#define BGX_CMRX_TX_STAT3 0x618 +#define BGX_CMRX_TX_STAT4 0x620 +#define BGX_CMRX_TX_STAT5 0x628 +#define BGX_CMRX_TX_STAT6 0x630 +#define BGX_CMRX_TX_STAT7 0x638 +#define BGX_CMRX_TX_STAT8 0x640 +#define BGX_CMRX_TX_STAT9 0x648 +#define BGX_CMRX_TX_STAT10 0x650 +#define BGX_CMRX_TX_STAT11 0x658 +#define BGX_CMRX_TX_STAT12 0x660 +#define BGX_CMRX_TX_STAT13 0x668 +#define BGX_CMRX_TX_STAT14 0x670 +#define BGX_CMRX_TX_STAT15 0x678 +#define BGX_CMRX_TX_STAT16 0x680 +#define BGX_CMRX_TX_STAT17 0x688 +#define BGX_CMR_TX_LMACS 0x1000 + +#define BGX_SPUX_CONTROL1 0x10000 +#define SPU_CTL_LOW_POWER BIT_ULL(11) +#define SPU_CTL_LOOPBACK BIT_ULL(14) +#define SPU_CTL_RESET BIT_ULL(15) +#define BGX_SPUX_STATUS1 0x10008 +#define SPU_STATUS1_RCV_LNK BIT_ULL(2) +#define BGX_SPUX_STATUS2 0x10020 +#define SPU_STATUS2_RCVFLT BIT_ULL(10) +#define BGX_SPUX_BX_STATUS 0x10028 +#define SPU_BX_STATUS_RX_ALIGN BIT_ULL(12) +#define BGX_SPUX_BR_STATUS1 0x10030 +#define SPU_BR_STATUS_BLK_LOCK BIT_ULL(0) +#define SPU_BR_STATUS_RCV_LNK BIT_ULL(12) +#define BGX_SPUX_BR_PMD_CRTL 0x10068 +#define SPU_PMD_CRTL_TRAIN_EN BIT_ULL(1) +#define BGX_SPUX_BR_PMD_LP_CUP 0x10078 +#define BGX_SPUX_BR_PMD_LD_CUP 0x10088 +#define BGX_SPUX_BR_PMD_LD_REP 0x10090 +#define BGX_SPUX_FEC_CONTROL 0x100A0 +#define SPU_FEC_CTL_FEC_EN BIT_ULL(0) +#define SPU_FEC_CTL_ERR_EN BIT_ULL(1) 
+#define BGX_SPUX_AN_CONTROL 0x100C8 +#define SPU_AN_CTL_AN_EN BIT_ULL(12) +#define SPU_AN_CTL_XNP_EN BIT_ULL(13) +#define SPU_AN_CTL_AN_RESTART BIT_ULL(15) +#define BGX_SPUX_AN_STATUS 0x100D0 +#define SPU_AN_STS_AN_COMPLETE BIT_ULL(5) +#define BGX_SPUX_AN_ADV 0x100D8 +#define BGX_SPUX_MISC_CONTROL 0x10218 +#define SPU_MISC_CTL_INTLV_RDISP BIT_ULL(10) +#define SPU_MISC_CTL_RX_DIS BIT_ULL(12) +#define BGX_SPUX_INT 0x10220 /* +(0..3) << 20 */ +#define BGX_SPUX_INT_W1S 0x10228 +#define BGX_SPUX_INT_ENA_W1C 0x10230 +#define BGX_SPUX_INT_ENA_W1S 0x10238 +#define BGX_SPU_DBG_CONTROL 0x10300 +#define SPU_DBG_CTL_AN_ARB_LINK_CHK_EN BIT_ULL(18) +#define SPU_DBG_CTL_AN_NONCE_MCT_DIS BIT_ULL(29) + +#define BGX_SMUX_RX_INT 0x20000 +#define BGX_SMUX_RX_JABBER 0x20030 +#define BGX_SMUX_RX_CTL 0x20048 +#define SMU_RX_CTL_STATUS (3ull << 0) +#define BGX_SMUX_TX_APPEND 0x20100 +#define SMU_TX_APPEND_FCS_D BIT_ULL(2) +#define BGX_SMUX_TX_PAUSE_PKT_TIME 0x20110 +#define BGX_SMUX_TX_MIN_PKT 0x20118 +#define BGX_SMUX_TX_PAUSE_PKT_INTERVAL 0x20120 +#define BGX_SMUX_TX_PAUSE_ZERO 0x20138 +#define BGX_SMUX_TX_INT 0x20140 +#define BGX_SMUX_TX_CTL 0x20178 +#define SMU_TX_CTL_DIC_EN BIT_ULL(0) +#define SMU_TX_CTL_UNI_EN BIT_ULL(1) +#define SMU_TX_CTL_LNK_STATUS (3ull << 4) +#define BGX_SMUX_TX_THRESH 0x20180 +#define BGX_SMUX_CTL 0x20200 +#define SMU_CTL_RX_IDLE BIT_ULL(0) +#define SMU_CTL_TX_IDLE BIT_ULL(1) +#define BGX_SMUX_CBFC_CTL 0x20218 +#define RX_EN BIT_ULL(0) +#define TX_EN BIT_ULL(1) +#define BCK_EN BIT_ULL(2) +#define DRP_EN BIT_ULL(3) + +#define BGX_GMP_PCS_MRX_CTL 0x30000 +#define PCS_MRX_CTL_RST_AN BIT_ULL(9) +#define PCS_MRX_CTL_PWR_DN BIT_ULL(11) +#define PCS_MRX_CTL_AN_EN BIT_ULL(12) +#define PCS_MRX_CTL_LOOPBACK1 BIT_ULL(14) +#define PCS_MRX_CTL_RESET BIT_ULL(15) +#define BGX_GMP_PCS_MRX_STATUS 0x30008 +#define PCS_MRX_STATUS_LINK BIT_ULL(2) +#define PCS_MRX_STATUS_AN_CPT BIT_ULL(5) +#define BGX_GMP_PCS_ANX_ADV 0x30010 +#define BGX_GMP_PCS_ANX_AN_RESULTS 0x30020 +#define 
BGX_GMP_PCS_LINKX_TIMER 0x30040 +#define PCS_LINKX_TIMER_COUNT 0x1E84 +#define BGX_GMP_PCS_SGM_AN_ADV 0x30068 +#define BGX_GMP_PCS_MISCX_CTL 0x30078 +#define PCS_MISC_CTL_MODE BIT_ULL(8) +#define PCS_MISC_CTL_DISP_EN BIT_ULL(13) +#define PCS_MISC_CTL_GMX_ENO BIT_ULL(11) +#define PCS_MISC_CTL_SAMP_PT_MASK 0x7Full +#define BGX_GMP_GMI_PRTX_CFG 0x38020 +#define GMI_PORT_CFG_SPEED BIT_ULL(1) +#define GMI_PORT_CFG_DUPLEX BIT_ULL(2) +#define GMI_PORT_CFG_SLOT_TIME BIT_ULL(3) +#define GMI_PORT_CFG_SPEED_MSB BIT_ULL(8) +#define GMI_PORT_CFG_RX_IDLE BIT_ULL(12) +#define GMI_PORT_CFG_TX_IDLE BIT_ULL(13) +#define BGX_GMP_GMI_RXX_JABBER 0x38038 +#define BGX_GMP_GMI_TXX_THRESH 0x38210 +#define BGX_GMP_GMI_TXX_APPEND 0x38218 +#define BGX_GMP_GMI_TXX_SLOT 0x38220 +#define BGX_GMP_GMI_TXX_BURST 0x38228 +#define BGX_GMP_GMI_TXX_MIN_PKT 0x38240 +#define BGX_GMP_GMI_TXX_SGMII_CTL 0x38300 +#define BGX_GMP_GMI_TXX_INT 0x38500 +#define BGX_GMP_GMI_TXX_INT_W1S 0x38508 +#define BGX_GMP_GMI_TXX_INT_ENA_W1C 0x38510 +#define BGX_GMP_GMI_TXX_INT_ENA_W1S 0x38518 +#define GMI_TXX_INT_PTP_LOST BIT_ULL(4) +#define GMI_TXX_INT_LATE_COL BIT_ULL(3) +#define GMI_TXX_INT_XSDEF BIT_ULL(2) +#define GMI_TXX_INT_XSCOL BIT_ULL(1) +#define GMI_TXX_INT_UNDFLW BIT_ULL(0) + +#define BGX_MSIX_VEC_0_29_ADDR 0x400000 /* +(0..29) << 4 */ +#define BGX_MSIX_VEC_0_29_CTL 0x400008 +#define BGX_MSIX_PBA_0 0x4F0000 + +/* MSI-X interrupts */ +#define BGX_MSIX_VECTORS 30 +#define BGX_LMAC_VEC_OFFSET 7 +#define BGX_MSIX_VEC_SHIFT 4 + +#define CMRX_INT 0 +#define SPUX_INT 1 +#define SMUX_RX_INT 2 +#define SMUX_TX_INT 3 +#define GMPX_PCS_INT 4 +#define GMPX_GMI_RX_INT 5 +#define GMPX_GMI_TX_INT 6 +#define CMR_MEM_INT 28 +#define SPU_MEM_INT 29 + +#define LMAC_INTR_LINK_UP BIT(0) +#define LMAC_INTR_LINK_DOWN BIT(1) + +/* RX_DMAC_CTL configuration*/ +enum MCAST_MODE { + MCAST_MODE_REJECT, + MCAST_MODE_ACCEPT, + MCAST_MODE_CAM_FILTER, + RSVD +}; + +#define BCAST_ACCEPT 1 +#define CAM_ACCEPT 1 + +int 
thunderx_bgx_initialize(unsigned int bgx_idx, unsigned int node); +void bgx_add_dmac_addr(uint64_t dmac, int node, int bgx_idx, int lmac); +void bgx_get_count(int node, int *bgx_count); +int bgx_get_lmac_count(int node, int bgx); +void bgx_print_stats(int bgx_idx, int lmac); +void xcv_init_hw(int phy_mode); +void xcv_setup_link(bool link_up, int link_speed); + +enum qlm_mode { + QLM_MODE_SGMII, /* SGMII, each lane independent */ + QLM_MODE_XAUI, /* 1 XAUI or DXAUI, 4 lanes */ + QLM_MODE_RXAUI, /* 2 RXAUI, 2 lanes each */ + QLM_MODE_XFI, /* 4 XFI, 1 lane each */ + QLM_MODE_XLAUI, /* 1 XLAUI, 4 lanes each */ + QLM_MODE_10G_KR, /* 4 10GBASE-KR, 1 lane each */ + QLM_MODE_40G_KR4, /* 1 40GBASE-KR4, 4 lanes each */ + QLM_MODE_QSGMII, /* 4 QSGMII, each lane independent */ + QLM_MODE_RGMII, /* 1 RGX */ +}; + +struct phy_info { + int mdio_bus; + int phy_addr; + bool autoneg_dis; +}; + +struct bgx_board_info { + struct phy_info phy_info[MAX_LMAC_PER_BGX]; + bool lmac_reg[MAX_LMAC_PER_BGX]; + bool lmac_enable[MAX_LMAC_PER_BGX]; +}; + +enum LMAC_TYPE { + BGX_MODE_SGMII = 0, /* 1 lane, 1.250 Gbaud */ + BGX_MODE_XAUI = 1, /* 4 lanes, 3.125 Gbaud */ + BGX_MODE_DXAUI = 1, /* 4 lanes, 6.250 Gbaud */ + BGX_MODE_RXAUI = 2, /* 2 lanes, 6.250 Gbaud */ + BGX_MODE_XFI = 3, /* 1 lane, 10.3125 Gbaud */ + BGX_MODE_XLAUI = 4, /* 4 lanes, 10.3125 Gbaud */ + BGX_MODE_10G_KR = 3,/* 1 lane, 10.3125 Gbaud */ + BGX_MODE_40G_KR = 4,/* 4 lanes, 10.3125 Gbaud */ + BGX_MODE_RGMII = 5, + BGX_MODE_QSGMII = 6, + BGX_MODE_INVALID = 7, +}; + +#endif /* THUNDER_BGX_H */ diff --git a/drivers/net/cavium/thunder_xcv.c b/drivers/net/cavium/thunder_xcv.c new file mode 100644 index 0000000000..b4342e16b6 --- /dev/null +++ b/drivers/net/cavium/thunder_xcv.c @@ -0,0 +1,190 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#include <config.h> +#include <common.h> +#include <net.h> +#include <dm.h> +#include <pci.h> +#include <misc.h> +#include <netdev.h> +#include <malloc.h> 
+#include <miiphy.h> +#include <asm/io.h> +#include <errno.h> + +#include <asm/arch-thunderx/thunderx.h> +#include <asm/arch-thunderx/thunderx_vnic.h> + +/* Register offsets */ +#define XCV_RESET 0x00 +#define PORT_EN BIT_ULL(63) +#define CLK_RESET BIT_ULL(15) +#define DLL_RESET BIT_ULL(11) +#define COMP_EN BIT_ULL(7) +#define TX_PKT_RESET BIT_ULL(3) +#define TX_DATA_RESET BIT_ULL(2) +#define RX_PKT_RESET BIT_ULL(1) +#define RX_DATA_RESET BIT_ULL(0) +#define XCV_DLL_CTL 0x10 +#define CLKRX_BYP BIT_ULL(23) +#define CLKTX_BYP BIT_ULL(15) +#define XCV_COMP_CTL 0x20 +#define DRV_BYP BIT_ULL(63) +#define XCV_CTL 0x30 +#define XCV_INT 0x40 +#define XCV_INT_W1S 0x48 +#define XCV_INT_ENA_W1C 0x50 +#define XCV_INT_ENA_W1S 0x58 +#define XCV_INBND_STATUS 0x80 +#define XCV_BATCH_CRD_RET 0x100 + +struct xcv { + void __iomem *reg_base; + struct pci_dev *pdev; +}; + +static struct xcv *xcv; + +void xcv_init_hw(int phy_mode) +{ + u64 cfg; + + /* Take DLL out of reset */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg &= ~DLL_RESET; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + + /* Take clock tree out of reset */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg &= ~CLK_RESET; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + /* Wait for DLL to lock */ + udelay(1000); + + /* Enable/bypass the DLL providing the MAC-based internal TX/RX delays */ + cfg = readq(CAVM_XCVX_RESET + XCV_DLL_CTL); + cfg &= ~0xffff00; + switch (phy_mode) { + /* RX and TX delays are added by the MAC */ + case PHY_INTERFACE_MODE_RGMII: + break; + /* internal RX and TX delays provided by the PHY */ + case PHY_INTERFACE_MODE_RGMII_ID: + cfg |= CLKRX_BYP; + cfg |= CLKTX_BYP; + break; + /* internal RX delay provided by the PHY, the MAC + * should not add an RX delay in this case + */ + case PHY_INTERFACE_MODE_RGMII_RXID: + cfg |= CLKRX_BYP; + break; + /* internal TX delay provided by the PHY, the MAC + * should not add a TX delay in this case + */ + case PHY_INTERFACE_MODE_RGMII_TXID: + cfg |= CLKTX_BYP; + break; + } + writeq(cfg, 
CAVM_XCVX_RESET + XCV_DLL_CTL); + + /* Enable compensation controller and force the + * write to be visible to HW by reading back. + */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg |= COMP_EN; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + readq(CAVM_XCVX_RESET + XCV_RESET); + /* Wait for compensation state machine to lock */ + udelay(10000); + + /* Enable the XCV block */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg |= PORT_EN; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg |= CLK_RESET; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); +} + +/* + * Configure the XCV link based on the speed. + * link_up : true when the link is up, false otherwise + * link_speed: The speed of the link in Mbps. + */ +void xcv_setup_link(bool link_up, int link_speed) +{ + u64 cfg; + int speed = 2; + + if (link_speed == 100) + speed = 1; + else if (link_speed == 10) + speed = 0; + + if (link_up) { + /* set operating speed */ + cfg = readq(CAVM_XCVX_RESET + XCV_CTL); + cfg &= ~0x03; + cfg |= speed; + writeq(cfg, CAVM_XCVX_RESET + XCV_CTL); + + /* Reset datapaths */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg |= TX_DATA_RESET | RX_DATA_RESET; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + + /* Enable the packet flow */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg |= TX_PKT_RESET | RX_PKT_RESET; + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + + /* Return credits to RGX */ + writeq(0x01, CAVM_XCVX_RESET + XCV_BATCH_CRD_RET); + } else { + /* Disable packet flow */ + cfg = readq(CAVM_XCVX_RESET + XCV_RESET); + cfg &= ~(TX_PKT_RESET | RX_PKT_RESET); + writeq(cfg, CAVM_XCVX_RESET + XCV_RESET); + readq(CAVM_XCVX_RESET + XCV_RESET); + } +} + +int thunderx_xcv_probe(struct udevice *dev) +{ + size_t size; + + xcv = dev_get_priv(dev); + if (!xcv) + return -ENOMEM; + xcv->reg_base = dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM); + if (!xcv->reg_base) { + printf("%s: No PCI region found\n", dev->name); + return -EINVAL; + } + + return 0; +} + +static const struct misc_ops thunderx_xcv_ops = { +}; + +static const struct udevice_id thunderx_xcv_ids[] = { 
+ { .compatible = "cavium,xcv" }, + {} +}; + +U_BOOT_DRIVER(thunderx_xcv) = { + .name = "thunderx_xcv", + .id = UCLASS_MISC, + .probe = thunderx_xcv_probe, + .of_match = thunderx_xcv_ids, + .ops = &thunderx_xcv_ops, + .priv_auto_alloc_size = sizeof(struct xcv), +}; + +static struct pci_device_id thunderx_pci_xcv_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_NIC_XCV) }, + { }, +}; + +U_BOOT_PCI_DEVICE(thunderx_xcv, thunderx_pci_xcv_supported); diff --git a/drivers/net/cavium/thunderx_smi.c b/drivers/net/cavium/thunderx_smi.c new file mode 100644 index 0000000000..1d0dd3c0dd --- /dev/null +++ b/drivers/net/cavium/thunderx_smi.c @@ -0,0 +1,388 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Cavium Inc + */ +#include <common.h> +#include <dm.h> +#include <pci.h> +#include <phy.h> +#include <miiphy.h> +#include <misc.h> +#include <malloc.h> +#include <asm/io.h> +#include <environment.h> +#include <linux/ctype.h> + +#define PCI_DEVICE_ID_THUNDERX_SMI 0xa02b + +DECLARE_GLOBAL_DATA_PTR; + +enum thunderx_smi_mode { + CLAUSE22 = 0, + CLAUSE45 = 1, +}; + +enum { + SMI_OP_C22_WRITE = 0, + SMI_OP_C22_READ = 1, + + SMI_OP_C45_ADDR = 0, + SMI_OP_C45_WRITE = 1, + SMI_OP_C45_PRIA = 2, + SMI_OP_C45_READ = 3, +}; + +union smi_x_clk { + u64 u; + struct smi_x_clk_s { + int phase:8; + int sample:4; + int preamble:1; + int clk_idle:1; + int reserved_14_14:1; + int sample_mode:1; + int sample_hi:5; + int reserved_21_23:3; + int mode:1; + } s; +}; + +union smi_x_cmd { + u64 u; + struct smi_x_cmd_s { + int reg_adr:5; + int reserved_5_7:3; + int phy_adr:5; + int reserved_13_15:3; + int phy_op:2; + } s; +}; + +union smi_x_wr_dat { + u64 u; + struct smi_x_wr_dat_s { + int dat:16; + int val:1; + int pending:1; + } s; +}; + +union smi_x_rd_dat { + u64 u; + struct smi_x_rd_dat_s { + int dat:16; + int val:1; + int pending:1; + } s; +}; + +union smi_x_en { + u64 u; + struct smi_x_en_s { + int en:1; + } s; +}; + +#define SMI_X_RD_DAT 0x10ull +#define SMI_X_WR_DAT 
0x08ull +#define SMI_X_CMD 0x00ull +#define SMI_X_CLK 0x18ull +#define SMI_X_EN 0x20ull + +struct thunderx_smi_priv { + void __iomem *baseaddr; + enum thunderx_smi_mode mode; +}; + +#define MDIO_TIMEOUT 10000 + +void thunderx_smi_setmode(struct mii_dev *bus, enum thunderx_smi_mode mode) +{ + struct thunderx_smi_priv *priv = bus->priv; + union smi_x_clk smix_clk; + + smix_clk.u = readq(priv->baseaddr + SMI_X_CLK); + smix_clk.s.mode = mode; + smix_clk.s.preamble = mode == CLAUSE45; + writeq(smix_clk.u, priv->baseaddr + SMI_X_CLK); + + priv->mode = mode; +} + +int thunderx_c45_addr(struct mii_dev *bus, int addr, int devad, int regnum) +{ + struct thunderx_smi_priv *priv = bus->priv; + + union smi_x_cmd smix_cmd; + union smi_x_wr_dat smix_wr_dat; + unsigned long timeout = MDIO_TIMEOUT; + + smix_wr_dat.u = 0; + smix_wr_dat.s.dat = regnum; + + writeq(smix_wr_dat.u, priv->baseaddr + SMI_X_WR_DAT); + + smix_cmd.u = 0; + smix_cmd.s.phy_op = SMI_OP_C45_ADDR; + smix_cmd.s.phy_adr = addr; + smix_cmd.s.reg_adr = devad; + + writeq(smix_cmd.u, priv->baseaddr + SMI_X_CMD); + + do { + smix_wr_dat.u = readq(priv->baseaddr + SMI_X_WR_DAT); + udelay(100); + timeout--; + } while (smix_wr_dat.s.pending && timeout); + + return timeout == 0; +} + +int thunderx_phy_read(struct mii_dev *bus, int addr, int devad, int regnum) +{ + struct thunderx_smi_priv *priv = bus->priv; + union smi_x_cmd smix_cmd; + union smi_x_rd_dat smix_rd_dat; + unsigned long timeout = MDIO_TIMEOUT; + int ret; + + enum thunderx_smi_mode mode = (devad < 0) ? 
CLAUSE22 : CLAUSE45; + + debug("RD: Mode: %u, baseaddr: %p, addr: %d, devad: %d, reg: %d\n", + mode, priv->baseaddr, addr, devad, regnum); + + thunderx_smi_setmode(bus, mode); + + if (mode == CLAUSE45) { + ret = thunderx_c45_addr(bus, addr, devad, regnum); + + debug("RD: ret: %u\n", ret); + + if (ret) + return 0; + } + + smix_cmd.u = 0; + smix_cmd.s.phy_adr = addr; + + + if (mode == CLAUSE45) { + smix_cmd.s.reg_adr = devad; + smix_cmd.s.phy_op = SMI_OP_C45_READ; + } else { + smix_cmd.s.reg_adr = regnum; + smix_cmd.s.phy_op = SMI_OP_C22_READ; + } + + writeq(smix_cmd.u, priv->baseaddr + SMI_X_CMD); + + do { + smix_rd_dat.u = readq(priv->baseaddr + SMI_X_RD_DAT); + udelay(10); + timeout--; + } while (smix_rd_dat.s.pending && timeout); + + debug("SMIX_RD_DAT: %lx\n", (unsigned long)smix_rd_dat.u); + + if (smix_rd_dat.s.val) + return smix_rd_dat.s.dat & 0xffff; + return -1; +} + +int thunderx_phy_write(struct mii_dev *bus, int addr, int devad, int regnum, + u16 value) +{ + struct thunderx_smi_priv *priv = bus->priv; + union smi_x_cmd smix_cmd; + union smi_x_wr_dat smix_wr_dat; + unsigned long timeout = MDIO_TIMEOUT; + int ret; + + enum thunderx_smi_mode mode = (devad < 0) ? 
CLAUSE22 : CLAUSE45; + + debug("WR: Mode: %u, baseaddr: %p, addr: %d, devad: %d, reg: %d\n", + mode, priv->baseaddr, addr, devad, regnum); + + if (mode == CLAUSE45) { + ret = thunderx_c45_addr(bus, addr, devad, regnum); + + debug("WR: ret: %u\n", ret); + + if (ret) + return ret; + } + + smix_wr_dat.u = 0; + smix_wr_dat.s.dat = value; + + writeq(smix_wr_dat.u, priv->baseaddr + SMI_X_WR_DAT); + + smix_cmd.u = 0; + smix_cmd.s.phy_adr = addr; + + if (mode == CLAUSE45) { + smix_cmd.s.reg_adr = devad; + smix_cmd.s.phy_op = SMI_OP_C45_WRITE; + } else { + smix_cmd.s.reg_adr = regnum; + smix_cmd.s.phy_op = SMI_OP_C22_WRITE; + } + + writeq(smix_cmd.u, priv->baseaddr + SMI_X_CMD); + + do { + smix_wr_dat.u = readq(priv->baseaddr + SMI_X_WR_DAT); + udelay(10); + timeout--; + } while (smix_wr_dat.s.pending && timeout); + + debug("SMIX_WR_DAT: %lx\n", (unsigned long)smix_wr_dat.u); + + return timeout == 0; +} + +int thunderx_smi_reset(struct mii_dev *bus) +{ + struct thunderx_smi_priv *priv = bus->priv; + + union smi_x_en smi_en; + + smi_en.s.en = 0; + writeq(smi_en.u, priv->baseaddr + SMI_X_EN); + + smi_en.s.en = 1; + writeq(smi_en.u, priv->baseaddr + SMI_X_EN); + + thunderx_smi_setmode(bus, CLAUSE22); + + return 0; +} + +/* PHY XS initialization, primarily for RXAUI + * + */ +int __cavm_if_phy_xs_init(struct mii_dev *bus, int phy_addr) +{ + int reg; + ulong start_time; + int phy_id1, phy_id2; + int oui, model_number; + + phy_id1 = thunderx_phy_read(bus, phy_addr, 1, 0x2); + phy_id2 = thunderx_phy_read(bus, phy_addr, 1, 0x3); + model_number = (phy_id2 >> 4) & 0x3F; + debug("%s model %x\n", __func__,model_number); + oui = phy_id1; + oui <<= 6; + oui |= (phy_id2 >> 10) & 0x3F; + debug("%s oui %x\n", __func__,oui); + switch (oui) + { + case 0x5016: + if (model_number == 9) + { + debug("%s +\n", __func__); + /* Perform hardware reset in XGXS control */ + reg = thunderx_phy_read(bus, phy_addr, 4, 0x0); + if ((reg & 0xffff) < 0) + goto read_error; + reg |= 0x8000; + 
thunderx_phy_write(bus, phy_addr, 4, 0x0, reg); + + start_time = get_timer(0); + do { + reg = thunderx_phy_read(bus, phy_addr, 4, 0x0); + if ((reg & 0xffff) < 0) + goto read_error; + } while ((reg & 0x8000) && get_timer(start_time) < 500); + if (reg & 0x8000) { + printf("Hardware reset for M88X3120 PHY failed, MII_BMCR: 0x%x\n", reg); + return -1; + } + /* program 4.49155 with 0x5 */ + thunderx_phy_write(bus, phy_addr, 4, 0xc003, 0x5); + } + break; + default: + break; + } + + return 0; + +read_error: + debug("M88X3120 PHY config read failed\n"); + return -1; +} + +int thunderx_smi_probe(struct udevice *dev) +{ + size_t size; + int ret, subnode, cnt = 0, node = dev->node.of_offset; + struct mii_dev *bus; + struct thunderx_smi_priv *priv; + pci_dev_t bdf = dm_pci_get_bdf(dev); + + debug("SMI PCI device: %x\n", bdf); + dev->req_seq = PCI_FUNC(bdf); + if (!dm_pci_map_bar(dev, 0, &size, PCI_REGION_MEM)) { + printf("%s: Failed to map PCI region for bdf %x\n", __func__, + bdf); + return -1; + } + + fdt_for_each_subnode(subnode, gd->fdt_blob, node) { + ret = fdt_node_check_compatible(gd->fdt_blob, subnode, + "cavium,thunder-8890-mdio"); + if (ret) + continue; + + bus = mdio_alloc(); + priv = malloc(sizeof(*priv)); + if (!bus || !priv) { + printf("Failed to allocate ThunderX MDIO bus # %u\n", + dev->seq); + return -1; + } + + bus->read = thunderx_phy_read; + bus->write = thunderx_phy_write; + bus->reset = thunderx_smi_reset; + bus->priv = priv; + + priv->mode = CLAUSE22; + priv->baseaddr = (void __iomem *)fdtdec_get_addr(gd->fdt_blob, + subnode, "reg"); + debug("mdio base addr %p\n", priv->baseaddr); + + /* use given name or generate its own unique name */ + snprintf(bus->name, MDIO_NAME_LEN, "smi%d", cnt++); + + ret = mdio_register(bus); + if (ret) + return ret; + } + return 0; +} + +static const struct misc_ops thunderx_smi_ops = { +}; + +static const struct udevice_id thunderx_smi_ids[] = { + { .compatible = "cavium,thunder-8890-mdio-nexus" }, + {} +}; + 
+U_BOOT_DRIVER(thunderx_smi) = { + .name = "thunderx_smi", + .id = UCLASS_MISC, + .probe = thunderx_smi_probe, + .of_match = thunderx_smi_ids, + .ops = &thunderx_smi_ops, +}; + +static struct pci_device_id thunderx_pci_smi_supported[] = { + { PCI_VDEVICE(CAVIUM, PCI_DEVICE_ID_THUNDERX_SMI) }, + { }, +}; + +U_BOOT_PCI_DEVICE(thunderx_smi, thunderx_pci_smi_supported); diff --git a/include/configs/thunderx_81xx.h b/include/configs/thunderx_81xx.h index c32aa7844c..30c2dc404e 100644 --- a/include/configs/thunderx_81xx.h +++ b/include/configs/thunderx_81xx.h @@ -29,7 +29,7 @@ #define CONFIG_SYS_MEMTEST_END (MEM_BASE + PHYS_SDRAM_1_SIZE)
 /* Size of malloc() pool */
-#define CONFIG_SYS_MALLOC_LEN		(CONFIG_ENV_SIZE + 1 * SZ_1M)
+#define CONFIG_SYS_MALLOC_LEN		(CONFIG_ENV_SIZE + 32 * SZ_1M)
 /* Generic Interrupt Controller Definitions */
 #define GICD_BASE			(0x801000000000)
@@ -57,7 +57,9 @@
 #define BOOT_TARGET_DEVICES(func) \
 	func(USB, usb, 0) \
-	func(SCSI, scsi, 0)
+	func(SCSI, scsi, 0) \
+	func(PXE, pxe, na) \
+	func(DHCP, dhcp, na)
#include <config_distro_bootcmd.h>
@@ -72,4 +74,9 @@
#define CONFIG_SYS_I2C_SPEED 100000
+#ifdef CONFIG_THUNDERX_VNIC
+/* enable board_late_init() to set env such as mac addrs */
+#define CONFIG_BOARD_LATE_INIT
+#endif
+
 #endif /* __THUNDERX_81XX_H__ */
diff --git a/net/eth_legacy.c b/net/eth_legacy.c
index 2a9caa3509..a556dc4224 100644
--- a/net/eth_legacy.c
+++ b/net/eth_legacy.c
@@ -238,9 +238,12 @@ int eth_initialize(void)
 {
 	int num_devices = 0;
+#if !defined(CONFIG_BOARD_EARLY_INIT_R)
 	eth_devices = NULL;
 	eth_current = NULL;
 	eth_common_init();
+#endif
+
 	/*
 	 * If board-specific initialization exists, call it.
 	 * If not, call a CPU-specific one

On boards whose SoC peripherals sit behind PCI devices, some devices must be probed during PCI configuration so they are available later (e.g. the VNIC on ThunderX SoCs).
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
---
 drivers/pci/pci_auto.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
diff --git a/drivers/pci/pci_auto.c b/drivers/pci/pci_auto.c
index d7237f6eee..7ab0e06cc7 100644
--- a/drivers/pci/pci_auto.c
+++ b/drivers/pci/pci_auto.c
@@ -9,6 +9,7 @@
 #include <common.h>
 #include <dm.h>
+#include <dm/device-internal.h>
 #include <errno.h>
 #include <pci.h>
@@ -128,6 +129,23 @@ void dm_pciauto_setup_device(struct udevice *dev, int bars_num,
 	}
 	if (!enum_only) {
+		u16 device, vendor;
+
+		dm_pci_read_config16(dev, PCI_DEVICE_ID, &device);
+		dm_pci_read_config16(dev, PCI_VENDOR_ID, &vendor);
+		if (vendor == PCI_VENDOR_ID_CAVIUM &&
+		    (device == PCI_DEVICE_ID_THUNDERX_SMI ||
+		     device == PCI_DEVICE_ID_THUNDERX_RGX ||
+		     device == PCI_DEVICE_ID_THUNDERX_BGX ||
+		     device == PCI_DEVICE_ID_THUNDERX_NIC_PF ||
+		     device == PCI_DEVICE_ID_THUNDERX_NIC_VF_1 ||
+		     device == PCI_DEVICE_ID_THUNDERX_NIC_VF)) {
+			debug("Probing 0x%04x:0x%04x %s\n", vendor, device,
+			      dev->name);
+			device_probe(dev);
+		}
+
 		/* Configure the expansion ROM address */
 		dm_pci_read_config8(dev, PCI_HEADER_TYPE, &header_type);
 		header_type &= 0x7f;