Help with LiteScope and jtagbone
gsomlo opened this issue
I'm trying to debug the AXI up-converter involved in the RocketChip DMA -> MEM -> LiteDRAM path, and wanted to use LiteScope with jtagbone for that purpose.
First, I applied the following patch to LiteX:
```diff
diff --git a/litex/soc/integration/soc.py b/litex/soc/integration/soc.py
index 17fe98f7..41068a29 100644
--- a/litex/soc/integration/soc.py
+++ b/litex/soc/integration/soc.py
@@ -30,6 +30,7 @@ from litex.soc.interconnect import stream
 from litex.soc.interconnect import wishbone
 from litex.soc.interconnect import axi
+from litescope import LiteScopeAnalyzer

 # Helpers ------------------------------------------------------------------------------------------

@@ -1632,6 +1633,7 @@ class LiteXSoC(SoC):
                     port         = port,
                     base_address = self.bus.regions["main_ram"].origin
                 )
+                self.dut_axi = mem_bus
             # UpConvert.
             elif data_width_ratio > 1:
                 axi_port = axi.AXIInterface(
@@ -1647,6 +1649,7 @@ class LiteXSoC(SoC):
                     port         = port,
                     base_address = self.bus.regions["main_ram"].origin
                 )
+                self.dut_axi = axi_port
             # DownConvert. FIXME: Pass through Wishbone for now, create/use native AXI converter.
             else:
                 mem_wb = wishbone.Interface(
@@ -1677,6 +1680,18 @@ class LiteXSoC(SoC):
             else:
                 raise NotImplementedError

+        # set up LiteScope:
+        analyzer_signals = [
+            self.cpu.l2fb_axi.aw, self.cpu.l2fb_axi.w, self.cpu.l2fb_axi.b,
+            self.cpu.l2fb_axi.ar, self.cpu.l2fb_axi.r,
+            self.dut_axi.aw, self.dut_axi.w, self.dut_axi.b,
+            self.dut_axi.ar, self.dut_axi.r,
+        ]
+        self.submodules.analyzer = LiteScopeAnalyzer(analyzer_signals,
+            depth        = 512,
+            clock_domain = "sys",
+            csr_csv      = "analyzer.csv")
+
         # Connect Main bus to LiteDRAM (with optional L2 Cache) ------------------------------------
         connect_main_bus_to_dram = (
             # No memory buses.
```
Which is to say, I want to look at the (narrow) DMA port (`l2fb_axi`) and the (wide) LiteDRAM port, either 1:1 (if Rocket is instantiated with a mem port matching the width of LiteDRAM), or after up-conversion (if Rocket's mem port is narrower than LiteDRAM).
I then built a bitstream for the nexys_video board like so:
```console
$ litex-boards/litex_boards/targets/digilent_nexys_video.py --build \
    --cpu-type rocket --cpu-variant linux \
    --cpu-num-cores 4 --cpu-mem-width 2 --sys-clk-freq 50e6 \
    --with-ethernet --with-sdcard --with-sata --sata-gen 1 \
    --with-jtagbone --csr-csv ./csr.csv
```
After programming the board with the resulting bitstream, I started litex_server with the following command line (and output):
```console
$ litex_server --jtag --jtag-config ./openocd_nexys_video.cfg
Open On-Chip Debugger 0.12.0
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
DEPRECATED! use 'adapter driver' not 'interface'
DEPRECATED! use 'ftdi vid_pid' not 'ftdi_vid_pid'
DEPRECATED! use 'ftdi channel' not 'ftdi_channel'
DEPRECATED! use 'ftdi layout_init' not 'ftdi_layout_init'
Info : auto-selecting first available session transport "jtag". To override use 'transport select <transport>'.
DEPRECATED! use 'adapter speed' not 'adapter_khz'
fpga_program
jtagstream_serve
Info : ftdi: if you experience problems at higher adapter clocks, try the command "ftdi tdo_sample_edge falling"
Info : clock speed 25000 kHz
Info : JTAG tap: xc7.tap tap/device found: 0x13636093 (mfg: 0x049 (Xilinx), part: 0x3636, ver: 0x1)
[CommUART] port: JTAG / tcp port: 1234
litex/tools/litex_server.py:175: DeprecationWarning: setDaemon() is deprecated, set the daemon attribute instead
Connected with 127.0.0.1:49400
```
Note that `Connected with 127.0.0.1:49400` only showed up after launching litescope_cli, as shown below:
```console
$ litescope_cli --csv ./analyzer.csv --csr-csv ./csr.csv \
    -r main_basesoc_rocket_l2fb_axi_aw_valid
```
I then issued the sataboot command to the LiteX BIOS via the serial console, and got the following output from litescope_cli:
```
Exact: main_basesoc_rocket_l2fb_axi_aw_valid
Rising edge: main_basesoc_rocket_l2fb_axi_aw_valid
[running]...
[uploading]...
[===>                          ] 15%
Traceback (most recent call last):
  File "litescope/software/litescope_cli.py", line 210, in <module>
  File "litescope/software/litescope_cli.py", line 206, in main
  File "litescope/software/litescope_cli.py", line 99, in run_batch
  File "litescope/software/driver/analyzer.py", line 164, in upload
  File "litex/tools/litex_client.py", line 88, in read
  File "litex/tools/remote/etherbone.py", line 406, in receive_packet
TimeoutError: timed out
[934851] Failed to execute script 'litescope_cli' due to unhandled exception!
```
Am I doing something wrong or missing some necessary step in the process?
(@enjoy-digital -- any feedback and/or advice much appreciated!)
There were some jtagbone performance improvements recently merged from #1433; are your changes on top of the master version with those updates?
I've had similar issues (timeouts) with etherbone, but haven't spent much time investigating.
Update: if I dial down `depth = 512` in the `LiteScopeAnalyzer()` instantiation to `depth = 64`, the `*.vcd` dump is uploaded successfully via litescope_cli. So this appears to be related to the size of the attempted data capture exceeding some limit, and erroring out less than gracefully as a consequence?
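For a rough sense of scale (the channel widths below are assumptions for illustration, not the actual analyzer layout of this SoC), capturing two full AXI ports at depth 512 means pulling on the order of a megabit out over jtagbone, one 32-bit read at a time:

```python
# Back-of-the-envelope capture-size estimate. The per-channel widths are
# hypothetical; real widths depend on the Rocket/LiteDRAM configuration.
bits_per_channel_set = 64 + 512 + 8 + 64 + 520   # assumed aw+w+b+ar+r payload bits
bits_per_sample      = 2 * bits_per_channel_set  # two AXI ports probed

depth      = 512
total_bits = bits_per_sample * depth             # analyzer buffer contents
reads      = (total_bits + 31) // 32             # 32-bit reads to upload it

print(total_bits)   # 1196032 bits under these assumptions
print(reads)        # 37376 reads at depth=512
print(reads // 8)   # depth=64 needs 8x fewer: 4672 reads
```

Every read is another opportunity for a transport error, so shrinking the capture (in depth or width) reduces the chance of hitting a timeout roughly proportionally.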
Good to hear. The problem is an occasional packet error, and a smaller buffer requires fewer packets.
I found the behavior can also be improved by capturing fewer signals: e.g. reduce the buffer width instead of the depth.
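To make the width-reduction idea concrete: the patch above passes whole channel records (`self.dut_axi.aw`, etc.) to the analyzer, which captures every field including the full address/data payloads. One could instead probe only the handshake strobes. The sketch below uses hypothetical name strings so it runs standalone; in the real target you would pass the actual Migen signals (e.g. `self.cpu.l2fb_axi.aw.valid`) to `LiteScopeAnalyzer`:

```python
# Hypothetical sketch: enumerate only the valid/ready strobes of each AXI
# channel instead of the full channel records, shrinking the per-sample
# width from hundreds of bits to about twenty.
def handshake_probe_names(port):
    channels = ["aw", "w", "b", "ar", "r"]
    return [f"{port}.{ch}.{sig}" for ch in channels for sig in ("valid", "ready")]

probes = handshake_probe_names("l2fb_axi") + handshake_probe_names("dut_axi")
print(len(probes))  # 20 single-bit probes covering both ports
```

This keeps enough visibility to spot stalled handshakes on either side of the up-converter while making the capture small enough to upload reliably.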
I'm going to close this for now, as it seems to be mostly working when the `data width x depth` capture size is kept below "reasonable" limits...