WARP Project Forums - Wireless Open-Access Research Platform

You are not logged in.

#1 2016-Jan-22 05:31:14

horace
Member
Registered: 2014-Jul-16
Posts: 63

BRAM logic interface

Hello, I have some questions about a BRAM interface created via sysgen.

I have added an additional BRAM (similar to the RX/TX pkt buffers) to the XPS design by copying blocks in the .mhs and altering addresses etc.

I have studied the interface in wlan_phy_rx_pmd > FCS & Pkt Buf > Pkt Buf Interface > BRAM IF 64b and used this (with different address generation logic) in a new sysgen model.

How do the port names in wlan_phy_rx_pmd > FCS & Pkt Buf > Pkt Buf Interface > BRAM IF 64b > BRAM I/O (such as BRAM_Dout etc.) translate to simply 'PORTB' in the .mhs? It seems to work but I can't see how. Something to do with the BRAM_* naming?

My connections are:
mb_shared_axi <--> my_new_model --> new_bram <--> new_bram_ctrl <--> mb_shared_axi
Which should allow the reading of registers in my_new_model from cpu_high and cpu_low. Also should allow writing to new_bram from my_new_model (in logic) and reading of new_bram from both cpu_high and cpu_low. Is this sensible?

Do I need to worry about mutexes for reading registers and the bram?

There are unmapped addresses in v1.4, axi2axi_eth_a_dma, does this matter?

Offline

 

#2 2016-Jan-22 08:54:31

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: BRAM logic interface

How do the port names in wlan_phy_rx_pmd > FCS & Pkt Buf > Pkt Buf Interface > BRAM IF 64b > BRAM I/O (such as BRAM_Dout etc.) translate to simply 'PORTB' in the .mhs? It seems to work but I can't see how. Something to do with the BRAM_* naming?

Look in the System Generator -> Settings -> Bus Interface dialog.

The EDK tools support grouping pcore PORTs into busses with the "BUS_INTERFACE" and "BUS" keywords in the core's MPD. Each PORT associated with a bus has the "BUS = bus_spec_name" keyword and a default net name for the port connection. Other cores which support the same bus interface use the same BUS_INTERFACE name and default net names. This allows XPS to connect all ports in a bus via the single bus connection.
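As an illustrative sketch (port names, widths, and the bus name here are made up, not copied from the reference design's MPD), the grouping looks roughly like this in a pcore's MPD:

```text
## Declare the bus interface
BUS_INTERFACE BUS = PORTB, BUS_STD = XIL_BRAM, BUS_TYPE = INITIATOR

## Each PORT carries BUS = <bus_name> plus a default net name
PORT BRAM_Addr = BRAM_Addr, DIR = O, VEC = [31:0], BUS = PORTB
PORT BRAM_Din  = BRAM_Din,  DIR = O, VEC = [63:0], BUS = PORTB
PORT BRAM_Dout = BRAM_Dout, DIR = I, VEC = [63:0], BUS = PORTB
PORT BRAM_WEN  = BRAM_WEN,  DIR = O, VEC = [7:0],  BUS = PORTB
```

A single `BUS_INTERFACE PORTB = some_conn_name` line in the .mhs then connects all of these ports at once, which is why the individual BRAM_* names never appear there.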

Sysgen supports grouping gateways into busses in the generated MPD via the Custom Bus Interface dialog box. Here you create a BUS_INTERFACE (top half), associate gateways with the bus and set the default net names for each gateway (bottom half). You can copy the packet buffer bus interface names for a new BRAM interface. For net names of other BUS_INTERFACEs you can look at the MPD of another pcore implementing that bus.

Offline

 

#3 2016-Jan-22 08:57:02

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: BRAM logic interface

mb_shared_axi <--> my_new_model --> new_bram <--> new_bram_ctrl <--> mb_shared_axi
Which should allow the reading of registers in my_new_model from cpu_high and cpu_low. Also should allow writing to new_bram from my_new_model (in logic) and reading of new_bram from both cpu_high and cpu_low. Is this sensible?

Do I need to worry about mutexes for reading registers and the bram?

That should work. The v1.4 802.11 design puts the Sysgen-built wlan_mac_time_hw core on the shared interconnect without a problem. Adding the bram shouldn't be an issue. The interconnect will arbitrate among bus masters (CPUs, DMAs) if there is contention for memory accesses.

Offline

 

#4 2016-Jan-22 11:02:23

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: BRAM logic interface

Look in the System Generator -> Settings -> Bus Interface dialog.

Brilliant, thanks, this is exactly the information I needed

Offline

 

#5 2016-Jan-22 11:34:51

welsh
Administrator
From: Mango Communications
Registered: 2013-May-15
Posts: 612

Re: BRAM logic interface

Do I need to worry about mutexes for reading registers and the bram?

You only need mutexes if you need to write to the BRAM. This is why we need the mutexes for the packet buffers: you don't want a CPU modifying the state of a buffer when it shouldn't. One thing I would suggest is to make sure the CDMA master is connected to the new BRAM so that you can perform CDMA operations. This can really speed up data transfers when moving data to DDR as part of the log.

There are unmapped addresses in v1.4, axi2axi_eth_a_dma, does this matter?

No. If you look at the v1.4 FPGA architecture, we made both Ethernet modules equivalent (i.e. we moved Ethernet B from an AXI FIFO to an AXI DMA and moved it to the mb_shared_axi interconnect; see the v1.3 FPGA architecture for the differences). This was so that we could support jumbo Ethernet frames in WLAN Exp. However, to minimize the mb_shared_axi complexity, we collapsed the three master interfaces of the AXI DMA into a single master interface on the mb_shared_axi (this is the Ethernet interconnect), since that does not decrease performance for this application. Because the AXI-to-AXI bridges used for this are connected to mb_shared_axi, they show up in the Addresses tab; but since both are master interfaces into mb_shared_axi, they have no address range accessible by either CPU. Therefore, you can just leave them unmapped.

Offline

 

#6 2016-Jan-25 08:45:03

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: BRAM logic interface

Hi, thanks for the further clarification. I did indeed also add the CDMA as a master to the BRAM (after studying the v1.4 architecture). I've used wlan_mac_high_cdma_start_transfer() and it seems to work well.

My next problem is how to read back variable length log entries in python via wlan_exp. In cpu_high I request a new log entry (with new ENTRY_TYPE_MY_NEW) of a specific size. I then use CDMA to copy data from the new BRAM to the new log entry. I notice in other examples (such as wlan_exp_log_create_rx_entry()) you cast to a rx_common_entry struct. My data is simply a contiguous set of u32's hence the CDMA memcpy.

But without a defined structure, how can I extract the raw bytes in python? After pulling the log from the warp, I do something like:

Code:

log_index = log_util.filter_log_index(raw_log_index, include_only=['NODE_INFO', 'TIME_INFO', 'RX_OFDM', 'ENTRY_TYPE_MY_NEW'])
log_np = log_util.log_data_to_np_arrays(log_data, log_index)
my_stats = log_np['ENTRY_TYPE_MY_NEW']
for stat in my_stats:
    # Do something clever here...

Is there a way to access the underlying bytes (ideally as u32's) from each variable length log entry of type ENTRY_TYPE_MY_NEW?
Is there a concern over byte ordering?

I can't see any examples of where you do this in existing code. The closest example seems to be entry_types.py > np_array_add_fields() but this only accesses the first 24 bytes?

Offline

 

#7 2016-Jan-25 10:22:36

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: BRAM logic interface

Accessing variable-length entries in log data will require a different processing flow than our Numpy-based examples.

There's an important distinction between log data retrieval (implemented in node.py) and log data processing (implemented in log/*.py). The log data retrieval code reads arbitrary log data from the node's DRAM and stores it as blobs of binary data in Python. This process makes no assumptions about the contents of the log data (i.e. it might represent a slice of the overall log, one giant log entry, thousands of tiny entries, a mix of big and small entries, etc.).

The log processing code in log/*.py is one way to process the binary log data retrieved by node.py. The index generation step (gen_raw_log_index) is nearly universal: it assumes only that the log_data argument starts at the first byte of a log entry, and that each entry's header conforms to the entry_header definition. The gen_raw_log_index() method does not require that every entry of a given type be the same size; it uses the entry_length value in each entry's header to stride through the log data.
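The striding can be sketched in a few lines of pure Python. The 8-byte header layout used here (u16 delimiter, u16 entry type, u32 entry length, little-endian) is an assumption for illustration; substitute whatever the real entry_header defines:

```python
import struct

# Hypothetical 8-byte entry header: u16 delimiter, u16 entry type,
# u32 entry length, all little-endian. Replace with the real
# entry_header layout from the wlan_exp framework.
HDR_FMT = '<HHI'
HDR_LEN = struct.calcsize(HDR_FMT)  # 8 bytes

def stride_log(log_data):
    """Yield (entry_type, payload_bytes) for each entry in a log blob."""
    offset = 0
    while offset + HDR_LEN <= len(log_data):
        delim, etype, elen = struct.unpack_from(HDR_FMT, log_data, offset)
        payload = log_data[offset + HDR_LEN : offset + HDR_LEN + elen]
        yield etype, payload
        offset += HDR_LEN + elen  # entry_length strides to the next header

# Two fake entries: type 7 with 4 payload bytes, type 9 with 8 payload bytes
blob = (struct.pack(HDR_FMT, 0xACED, 7, 4) + b'\x01\x02\x03\x04' +
        struct.pack(HDR_FMT, 0xACED, 9, 8) + b'\x00' * 8)
print([(t, len(p)) for t, p in stride_log(blob)])  # -> [(7, 4), (9, 8)]
```

Note that entries of the same type can have different lengths here; the stride depends only on each header's length field.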

The second step in our examples uses numpy to create structured arrays for each log entry type. Numpy is great for processing structured data like this, and it's fast. The generate_numpy_array() method uses numpy's fromiter array creation routine; letting numpy's internals handle the iteration over raw data is *way* faster than using a loop in Python. One big restriction of numpy structured arrays, though, is that every entry in an array must have the same datatype, and a given datatype has a fixed size. This is why we define TX_LOW_LTG/RX_OFDM_LTG versus TX_LOW/RX_OFDM: the _LTG entries can access the extra 20 bytes of MAC payload containing the LTG flow ID and uniq_seq values.
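The fixed-size restriction is easy to see directly. A minimal sketch (the field names and sizes here are made up, not the real entry definitions, and frombuffer stands in for the framework's fromiter-based routine):

```python
import numpy as np

# A structured dtype has a fixed itemsize: every element carries the
# same fields, including a fixed-length payload.
entry_dt = np.dtype([('timestamp', '<u8'), ('payload', 'u1', (20,))])
print(entry_dt.itemsize)  # -> 28: every entry is exactly 28 bytes

# Interpreting a raw byte blob as an array of this dtype; three fake
# entries with timestamps 0, 1, 2 and all-zero payloads
blob = b''.join(i.to_bytes(8, 'little') + bytes(20) for i in range(3))
arr = np.frombuffer(blob, dtype=entry_dt)
print(arr['timestamp'])  # -> [0 1 2]
```

There is no way to express "this entry's payload is 12 bytes, the next one's is 40" in a single structured dtype, which is what forces the two options below for variable-length entries.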

For your application, where each instance of ENTRY_TYPE_MY_NEW has a different length, I can think of two options:

1) Add a 'payload_len' field to your new log entry, then define the Python entry_type with a payload size equal to the largest size of any ENTRY_TYPE_MY_NEW payload (call that M). When you run generate_numpy_array(), each entry in the output array will have a payload field of M bytes. The first payload_len bytes will be valid; the bytes past payload_len will be bogus (they'll actually be the start of the next log entry).
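On the Python side, option 1 might look like the following sketch. The 'payload_len' field name, the 32-byte maximum, and the frombuffer call are all illustrative stand-ins for your real entry definition:

```python
import struct
import numpy as np

M = 32  # assumed maximum ENTRY_TYPE_MY_NEW payload size, in bytes

# Entry dtype: a real length field plus a payload padded out to M bytes
my_dt = np.dtype([('payload_len', '<u4'), ('payload', 'u1', (M,))])

# One fake padded entry: 12 valid bytes, the rest is whatever happened
# to follow in the log (zeros here)
valid = bytes(range(12))
blob = struct.pack('<I', len(valid)) + valid + b'\x00' * (M - len(valid))
arr = np.frombuffer(blob, dtype=my_dt)

entry = arr[0]
good = entry['payload'][:entry['payload_len']]  # keep only the valid prefix
print(good.view('<u4'))  # reinterpret the valid bytes as u32 words
```

Slicing by payload_len before any further processing is the key step; everything past it is garbage that merely pads the entry to the fixed dtype size.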

2) Drop numpy entirely, and parse the log data with pure Python. One approach:
-Run gen_raw_log_index and extract the indexes of the ENTRY_TYPE_MY_NEW entries
-Iterate over each ENTRY_TYPE_MY_NEW entry, using Python's struct.unpack to read the entry header and ENTRY_TYPE_MY_NEW payload into a Python list. Python is happy to construct lists of heterogeneous data types. This will be slow compared to numpy, which is only a problem if your data has many ENTRY_TYPE_MY_NEW entries.
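A sketch of option 2, assuming the raw log index maps entry type IDs to lists of byte offsets, and assuming a hypothetical payload layout of a u32 count followed by that many u32 words; the type ID, header length, and layout are all made up, so adjust them to your real definitions:

```python
import struct

ENTRY_TYPE_MY_NEW = 100  # hypothetical entry type ID
HDR_LEN = 8              # assumed entry header length, in bytes

def parse_my_new_entries(log_data, raw_log_index):
    """Unpack each ENTRY_TYPE_MY_NEW payload into a list of u32 values."""
    entries = []
    for offset in raw_log_index.get(ENTRY_TYPE_MY_NEW, []):
        # Assumed payload layout: u32 word count, then that many u32
        # words. The '<' prefix makes the byte order explicitly
        # little-endian, so host endianness is not a concern.
        (n,) = struct.unpack_from('<I', log_data, offset + HDR_LEN)
        words = struct.unpack_from('<%dI' % n, log_data, offset + HDR_LEN + 4)
        entries.append(list(words))
    return entries

# Fake log: one entry at offset 0 with an 8-byte header and 3 payload words
log_data = b'\x00' * HDR_LEN + struct.pack('<I3I', 3, 10, 20, 30)
print(parse_my_new_entries(log_data, {ENTRY_TYPE_MY_NEW: [0]}))
# -> [[10, 20, 30]]
```

This also answers the byte-ordering question: as long as the struct format string pins the endianness to match what the hardware writes, the extracted u32 values are correct on any host.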

Offline

 

Board footer