WARP Project Forums - Wireless Open-Access Research Platform


#1 2015-Nov-11 11:41:44

horace
Member
Registered: 2014-Jul-16
Posts: 63

Consecutive calls to gen_raw_log_index()

Hello. I am using the Python wlan_exp framework to extract log entries. I have a problem with consecutive calls to the gen_raw_log_index() function. The flow of the program is similar to log_capture_continuous:

1. Configure the node
2. Trigger an action which writes log entries on the warp
3. Collect log entries and perform processing
4. Repeat

On the first iteration it works. On the second iteration it errors with:
ERROR: Log file didn't start with valid entry header (offset 0)!

My code is:

Code:

for i in range(0, 10):
   #Trigger action on warp which writes log entries

   #Get log entries and select index corresponding to my action
   buffer = node.log_get_all_new(log_tail_pad=500)
   data = buffer.get_bytes()
   index = log_util.gen_raw_log_index(data)
   filtered = log_util.filter_log_index(index, include_only=['MY_ACTION'])

   #Get the actual data
   actual_data = log_util.log_data_to_np_arrays(data, filtered)

   #Do something with the collected data and repeat.

I've checked out gen_raw_log_index() where the error occurs and see the offending 'offset' variable.

How does this function maintain state between calls? The function log_get_all_new() appears to remember where it finished reading, so why does gen_raw_log_index() complain that there is no valid entry header? There won't be a valid entry header since it's part way through reading the log... or is this totally wrong?!

Any ideas would be really helpful

Offline

 

#2 2015-Nov-11 12:06:32

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

There won't be a valid entry header since it's part way through reading the log...

Correct - this is the source of the error you're seeing.

The logging code at the node maintains minimal state. The application requests space for a log entry of a given size. The logging framework returns a pointer to a DRAM location, then advances its internal pointer by the requested size. The logging code does not maintain an index of already-written entries - it does not know where previous entries start/stop.

When wlan_exp reads log data it specifies the range of bytes it needs. The node's log code returns those bytes. This handshake treats the entire log memory section as a binary blob. Neither environment operates on log entry boundaries when transferring log data over-the-wire.

Further, the node does not maintain state about what log data has been read by a given wlan_exp master. This was a conscious design choice, enabling multiple Python masters to retrieve the same log data without any extra bookkeeping at the node. node.log_get_all_new() uses a property in the wlan_exp node object to record the ranges of log data which have already been retrieved. Multiple calls to log_get_all_new() are safe as long as the wlan_exp node object and the actual node are not reset.

The gen_raw_log_index() utility requires a log data byte array that starts at the first byte of a log entry. It uses only the log entry headers to build its index and to jump ahead to each log entry. The gen_raw_log_index() code does not parse each entry's contents. Again this was intentional - you can run gen_raw_log_index() for arbitrary log data arrays without knowledge of each log entry's format.

All together, this means you should run log_get_all_new() as many times as you need to gather raw log data, then run gen_raw_log_index() at the end of the retrieval. If your experiment requires parsing log data while the node is still writing log entries, you should be able to run gen_raw_log_index() multiple times, always referencing the entire log data array.
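As a toy illustration of why the input must start on an entry boundary (this uses a made-up 4-byte header - 2-byte magic + 2-byte length - not the real wlan_exp entry format):

```python
import struct

MAGIC = 0xACED  # hypothetical entry-header magic, not the real wlan_exp value

def gen_toy_log_index(data):
    """Walk entry headers, recording the offset of each entry."""
    index = []
    offset = 0
    while offset < len(data):
        magic, length = struct.unpack_from('<HH', data, offset)
        if magic != MAGIC:
            raise ValueError("Log data didn't start with valid entry header (offset %d)!" % offset)
        index.append(offset)
        offset += 4 + length  # skip header + payload to reach the next entry
    return index

# Two entries, with 6-byte and 2-byte payloads
log = struct.pack('<HH6s', MAGIC, 6, b'aaaaaa') + struct.pack('<HH2s', MAGIC, 2, b'bb')
print(gen_toy_log_index(log))   # [0, 10]
# gen_toy_log_index(log[3:])    # raises - the array starts mid-entry
```

Slicing the array anywhere other than an entry boundary makes the very first header check fail, which is exactly the "offset 0" error above.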

Offline

 

#3 2015-Nov-11 21:50:35

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

On re-reading your post I may have misunderstood the objective. Are you trying to run 10 independent trials, hoping to read 10 log files and process them separately? If so you can just reset the log before each trial with node.reset(log=True). This command will reset the node hardware's logging state (the write pointer I mentioned above) and the wlan_exp node object's local log-data-already-read variables. After this reset you can trigger log-writing activity, then run log_get_all_new() knowing the returned data will start at the beginning of the first post-reset log entry.
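To make that flow concrete, here is a sketch of the per-trial loop; FakeNode is just a stand-in so the example runs on its own - the real calls are node.reset(log=True), node.log_get_all_new() and log_util.gen_raw_log_index():

```python
class FakeNode:
    """Stand-in for a wlan_exp node: reset(log=True) clears the log and the
    local already-read state; log_get_all_new() returns bytes written since
    the last read."""
    def __init__(self):
        self._log = bytearray()
        self._read_offset = 0

    def reset(self, log=False):
        if log:
            self._log = bytearray()
            self._read_offset = 0

    def write_entries(self, data):   # stands in for on-node log writes
        self._log += data

    def log_get_all_new(self):
        new = bytes(self._log[self._read_offset:])
        self._read_offset = len(self._log)
        return new

node = FakeNode()
for trial in range(3):
    node.reset(log=True)             # log starts empty each trial
    node.write_entries(b'ENTRY%d' % trial)
    data = node.log_get_all_new()    # starts at the first post-reset entry
    print(data)
```

Because each trial's data starts at byte 0 of a log entry, indexing it per-trial is safe.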

Offline

 

#4 2015-Nov-13 03:31:28

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Hi, thanks for your responses on this, they cleared up some confusion. I think my solution covers both your answers.
Initially I was trying to gather log data, then run gen_raw_log_index(), then gather more log data and store it in a new location (i.e. not append it to the first set of log data), then run gen_raw_log_index() again. gen_raw_log_index() failed the second time since the log data was missing the first section of the log - does this sound correct?
I have fixed this by changing my script to use node.reset(log=True) in between consecutive reads of the log, which is fine for my application.
Must log data always be written to an hdf5 container (as per log_capture_continuous), or can it simply be stored as a Python variable, bytearray or similar?

Offline

 

#5 2015-Nov-13 09:02:57

welsh
Administrator
From: Mango Communications
Registered: 2013-May-15
Posts: 612

Re: Consecutive calls to gen_raw_log_index()

No, you don't have to write the data to an HDF5 file. We recommend doing so because, if the experiment crashes, you won't lose any information.

If you look at the log_capture_continuous example, you can see how the commands are used. The log_get_all_new() command returns a buffer that you can then pull the raw bytes from. You can then use the log_data in your script, or write it to a file like we did in the example.

There is a lot of information on the log in the WLAN Exp documentation. Hopefully that helps.

Offline

 

#6 2015-Nov-13 09:07:20

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Initially I was trying to gather log data, then run gen_raw_log_index(), then gather more log data and store it in a new location (i.e. not append it to the first set of log data), then run gen_raw_log_index() again. gen_raw_log_index() failed the second time since the log data was missing the first section of the log - does this sound correct?

Yes, this is the expected behavior, since gen_raw_log_index() requires an input byte array that starts at the first byte of a log entry.

Must log data always be written to an hdf5 container (as per log_capture_continuous), or can it simply be stored as a Python variable, bytearray or similar?

Using hdf5 files is not required. The examples use hdf5 for easy storage of raw and processed log data, demonstrating how the log capture and log analysis steps can be safely separated, even run on different machines.

Offline

 

#7 2016-Oct-20 08:14:23

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

This is an old topic but I have encountered the same errors with new work.

I am intending to do long 24hr captures of our environment and logging via wlan_exp simultaneously on three nodes. All parsing of the log data is done retrospectively. I have tried two approaches to this and both have weaknesses and I was wondering if you could help.

1) Similar to log_continuous_capture - We log to one .hdf5 file (per node) for the entire 24 hours.
or
2) Roll the log when the container .get_log_data_size() indicates it is greater than a certain threshold.

Problems:
1) If one of the nodes falls over during the experiment, wlan_exp gives the error: "Error: Max retransmissions without reply from node". Nodes do not crash often with v1.5.3 but occasionally they will hang (perhaps linked to this post). I assume this causes the .hdf5 file to close incorrectly, because when attempting to parse it the error "OSError: Unable to open file (Unable to find a valid file signature)" occurs, making the large .hdf5 capture useless.

2) If we roll the HDF5LogContainer and .hdf5 file every e.g. 4MB, then we don't lose an entire capture when the node dies - only that one small .hdf5 file. However, we are then presented with the gnarly problem of parsing a log file which may not have correctly aligned log entries (as described above): .write_log_index() gives the error "ERROR: Log file didn't start with valid entry header (offset 0)!" when attempting to create the index for an HDF5LogContainer which starts not with a log entry but perhaps in the middle of one.

Solutions:
Can we roll a log file without writing the log index to it - so each roll is simply a blob of contiguous data. Then when parsing, combine all rolls in memory one after the other, then use .write_log_index() to write the index of the whole thing (lots of memory required)?

I hope the problem and our descriptions make sense.

Last edited by horace (2016-Oct-20 08:14:52)

Offline

 

#8 2016-Oct-20 12:27:38

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Can we roll a log file without writing the log index to it - so each roll is simply a blob of contiguous data. Then when parsing, combine all rolls in memory one after the other, then use .write_log_index() to write the index of the whole thing (lots of memory required)?

I think this is a good approach. The wlan_exp HDF5LogContainer class provides a thin wrapper over raw HDF5 files, and the h5py package provides a numpy-compatible interface to HDF5 containers. Re-assembling a group of log data segments from individual HDF5 files into a valid log data file is pretty easy.

Here's a quick proof-of-concept:

Script #1: create multiple HDF5 files, each containing 4MB of log data. This script starts from a valid log file and creates 11 new HDF5 files. Your actual experiment would replace this with a script that repeatedly reads data from a node and writes a new file. The important point here is that the segment files are not valid log files - they are just HDF5 containers for a 4MB binary blob.

Code:

import numpy as np

import wlan_exp.log.util as log_util
import wlan_exp.log.util_hdf as hdf_util

LOGFILE_ORIG = 'ap_two_node_two_flow_capture.hdf5'
SEG_SIZE = 4 * 2**20 #4MB

# Open the original file
h5_file_orig = hdf_util.hdf5_open_file(LOGFILE_ORIG, readonly=True)

# Access the HDF5 dataset named 'log_data'
log_data_ds = h5_file_orig['log_data']
log_data_len = len(log_data_ds)

# Iterate over the log data, writing one new HDF5 file per segment 
idx = 0
seg_index = 0
while seg_index < log_data_len:
    if (seg_index + SEG_SIZE) < log_data_len:
        # Write full segment to new HDF5 file
        d = log_data_ds[seg_index : seg_index + SEG_SIZE]
    else:
        # Write final partial segment
        d = log_data_ds[seg_index : log_data_len]

    # Create empty HDF5 file
    h5_file_seg = hdf_util.hdf5_open_file('log_data_seg_{0:02}.hdf5'.format(idx))

    # Create and write the 'log_data_seg' dataset
    h5_file_seg['log_data_seg'] = d
    h5_file_seg.flush()
    h5_file_seg.close()

    idx += 1
    seg_index += SEG_SIZE

# Close the original file
h5_file_orig.close()

Script #2: create a new empty log data file, then read the 11 partial log data files, appending each segment to the output log file.

Code:

import numpy as np

import wlan_exp.log.util as log_util
import wlan_exp.log.util_hdf as hdf_util

LOGFILE_OUT = 'log_data_stitched.hdf5'
SEG_FILES = sorted(['log_data_seg_{0:02}.hdf5'.format(i) for i in range(0,11)])

# Create the new empty log file using the HDF5LogContainer wrapper
h5_file = hdf_util.hdf5_open_file(LOGFILE_OUT, readonly=False)
log_file_out = hdf_util.HDF5LogContainer(h5_file)

for f in SEG_FILES:
    # Open the segment file
    h5_file_seg = hdf_util.hdf5_open_file(f, readonly=True)

    # Access the 'log_data_seg' dataset
    d = h5_file_seg['log_data_seg']
    log_file_out.write_log_data(d[:])

    # Close the segment file before moving to the next one
    h5_file_seg.close()

# Write a log data index to the new log data file
log_file_out.write_log_index()

I quickly tested this with one of our sample data files, successfully comparing the output of log_process_summary.py for the original and re-assembled file.

Offline

 

#9 2016-Oct-20 12:32:14

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Nodes do not crash often with v1.5.3 but occasionally they will hang

Any more details you can provide about these hangs would be great. There are a few places in CPU Low where the code blocks until certain MAC/PHY states are observed. We've added xil_printf() calls to these blocking loops to help isolate which ones are responsible for hangs. These prints require switching the UART output to CPU Low (right-most DIP switch down). We have not observed any "CPU Low stuck forever" hangs in v1.5.3. If you're seeing this, it would be very helpful to know which loop is responsible.

Offline

 

#10 2016-Oct-21 03:35:22

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Details on crash: This morning one had crashed overnight, resulting in the Python error "Did not provide any log data". I didn't have UART plugged in but doing so now, CPU LOW reports: "Stuck waiting for MAC_HW_LASTBYTE_ADDR1: wlan_mac_get_last_byte_index() = 0". Repeatedly printed to UART.
I will rerun with UART attached but with three nodes it's tricky to know which one will fall over and when. Typically in my environment the node on channel 1 dies first, but this time it was channel 6.

Offline

 

#11 2016-Oct-21 04:25:16

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Thanks for the 'chunking' code. I managed a similar thing yesterday with:

Code:

# Store for all captured data
tmp_h5_file = hdf_util.hdf5_open_file(PATH+'/tmp_'+str(channel)+'.hdf5')
log_container = hdf_util.HDF5LogContainer(file_handle=tmp_h5_file)
    
# We must concatenate all the data files to produce one big blob
for i in range(0,len(files)):
    #Debug
    print("Data from {0}".format(i))
    # Get the actual data from file
    input_file = folder+'/'+FILE_PREFIX+str(i)+'.hdf5'    
    # Get the log_data from the file
    all_data = bytearray(hdf_util.hdf5_to_log_data(filename=input_file))
    # Put in container
    log_container.write_log_data(all_data, True)
        
# Get the raw_log_index from the file
raw_log_index = log_container.get_log_index(gen_index=True)

It took a long time to realise that all_data = bytearray(hdf_util.hdf5_to_log_data(filename=input_file)) must have the bytearray() cast. Without it I get the error: "TypeError: expected an object with a writable buffer interface".
I don't really understand this, since hdf5_to_log_data() (via get_log_data()) returns bytes(), which should be appendable with log_container.write_log_data() no problem...
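My best guess is that bytes is read-only while bytearray exposes a writable buffer, which you can see via memoryview:

```python
# bytes is immutable, so any API that requires a writable buffer rejects it;
# bytearray backs the same data with a writable buffer
ro = memoryview(b'\x00\x01\x02')
rw = memoryview(bytearray(b'\x00\x01\x02'))

print(ro.readonly)  # True
print(rw.readonly)  # False
```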

Is there any way to create a LogContainer without requiring an HDF5 file? i.e. load all the chunks and create the index entirely in memory, to save writing the intermediate hdf5 file tmp_h5_file.

Offline

 

#12 2016-Oct-21 08:42:37

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Another hung state when using NOMAC (previous post was using DCF MAC): "Stuck in wlan_mac_hw_rx_finish! 0x00008180"

Offline

 

#13 2016-Oct-21 11:36:15

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Is there any way to create a LogContainer without requiring an HDF5 file? i.e. load all the chunks and create the index entirely in memory, to save writing the intermediate hdf5 file tmp_h5_file.

A simpler approach is using numpy arrays. The example below operates on the same 4MB segment files as Script #2 above. It reconstructs a log data array in memory, then calculates the log data index for that array.

Code:

import numpy as np

import wlan_exp.util as util
import wlan_exp.log.util as log_util
import wlan_exp.log.util_hdf as hdf_util

SEG_FILES = sorted(['log_data_seg_{0:02}.hdf5'.format(i) for i in range(0,11)])

# Create a numpy array to hold the full re-assembled log data
np_dt = np.dtype('V1')
log_data_np = np.empty((0,), dtype=np_dt)

for f in SEG_FILES:
    # Open the segment file
    h5_file_seg = hdf_util.hdf5_open_file(f, readonly=True)

    # Access the 'log_data_seg' HDF5 dataset
    d = h5_file_seg['log_data_seg']

    # Append the bytes from this segment to the full log data array
    log_data_np = np.concatenate( (log_data_np, d[:]) )

# Access the raw bytes for the numpy array
log_data = log_data_np.data

# Calculate the log data index
log_data_index = log_util.gen_raw_log_index(log_data)

util.debug_here()

It's good to think of HDF5 files only as on-disk containers for log data. If you want to operate on log data in memory you can (and should) skip routing that data through HDF5 files entirely.

Offline

 

#14 2016-Oct-21 11:40:01

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Another hung state when using NOMAC (previous post was using DCF MAC): "Stuck in wlan_mac_hw_rx_finish! 0x00008180"

Details on crash: This morning one had crashed overnight, resulting in the Python error "Did not provide any log data". I didn't have UART plugged in but doing so now, CPU LOW reports: "Stuck waiting for MAC_HW_LASTBYTE_ADDR1: wlan_mac_get_last_byte_index() = 0". Repeatedly printed to UART.

Thanks for these details. Are you running the stock v1.5.3 design? If not can you describe any customizations you've made to the code/cores?

Offline

 

#15 2016-Oct-21 12:24:13

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Yes, as downloaded from warpproject.org.
"Stuck waiting for MAC_HW_LASTBYTE_ADDR1" was using the w3_802.11_STA_v1.5.3.bin distribution (STA + DCF MAC) as downloaded in the .zip
"Stuck in wlan_mac_hw_rx_finish! 0x00008180" was using a programmed version of STA + NOMAC. No changes to C code, simply opened the SDK Projects and selected STA and NOMAC for programming (via an SD card .bin etc).

Offline

 

#16 2016-Oct-21 12:43:44

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

First, apologies for the crashes. I really thought we had squashed the last bug in MAC/PHY Rx state that manifests like this.

You mention channels 1,6 above. Do you only see this behavior when tuned to 2.4GHz channels? One debug step would be to disable the DSSS receiver logic ("n.enable_dsss(False)" in Python). Unfortunately the "stuck" prints above don't include the status bit that encodes which PHY (DSSS vs OFDM) is holding the RX_PHY_ACTIVE bit.

Offline

 

#17 2016-Oct-21 13:00:23

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

No problem. I assume it's a rogue frame on the channel which upsets the rx phy? It's quite an active environment although I'm starting to think someone is crafting frames to kill my experiment...
I've only ever tried 2.4GHz and although I said ch1 was the most common, it seems to happen across all channels and boards. I'll try 5GHz.
Yes, ok, I've had DSSS enabled so will disable that and let you know.

Offline

 

#18 2016-Oct-25 03:21:59

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Disabling DSSS appears to make no difference. It still hangs, most often with the 8180 error. Any more thoughts on this? Can I add some extra debug lines for you?

Thanks for the 'in memory' Python script for concatenating logs. This works well. However, I needed to use np.dtype('u1') rather than np.dtype('V1') - I don't see why it should make a difference, but numpy complains otherwise. I also added h5_file_seg.close() in the loop.

Offline

 

#19 2016-Oct-25 20:51:17

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

Thanks for the 'in memory' Python script for concatenating logs. This works well. However, I needed to use np.dtype('u1') rather than np.dtype('V1') - I don't see why it should make a difference, but numpy complains otherwise.

Great, glad that worked. What are your Python/numpy versions? I've been testing with 2.7.12 / 1.11.0. I haven't seen the 'V1' vs 'u1' issue before - if you're using a newer numpy it's good for us to know about this.


Disabling DSSS appears to make no difference. It still hangs, most often with the 8180 error. Any more thoughts on this? Can I add some extra debug lines for you?

The code block below will print all the relevant status bits from the MAC core and Rx PHY.

Code:

xil_printf("MAC HW Status: 0x%08x\n", wlan_mac_get_status());
xil_printf("Rx Hdr Params: 0x%08x\n", wlan_mac_get_rx_phy_hdr_params());
xil_printf("Rx PHY Status: 0x%08x\n", Xil_In32(WLAN_RX_STATUS));

Knowing those register values will help me a lot in reproducing this behavior in simulation. Also, please let me know if there's anything unusual about your environment (any non-standard Wi-Fi devices, etc).

In the meantime you might be able to recover from this condition with some C code modifications. Instead of waiting forever printing the "Stuck" message, I think you could reset the Rx PHY (code below), then return from the frame_receive() context. The validity of this depends on your experiment - this could work if your experiment can tolerate some missed Rx events and the resulting backoff/retransmit behavior at the transmitting node while the Rx PHY is stuck.

Code:

// Toggle Rx PHY reset bit
REG_SET_BITS(WLAN_RX_REG_CTRL, WLAN_RX_REG_CTRL_RESET);
REG_CLEAR_BITS(WLAN_RX_REG_CTRL, WLAN_RX_REG_CTRL_RESET);

Offline

 

#20 2016-Oct-26 08:23:34

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

Hello, thanks for this. Here are the status bits from a recent crash - not one we've seen before: 0x20100.

Code:

Stuck in wlan_mac_hw_rx_finish! 0x00020100
MAC HW Status: 0x00020100
Rx Hdr Params: 0x0A050085
Rx PHY Status: 0x00000002

Stuck in wlan_mac_hw_rx_finish! 0x00020100
MAC HW Status: 0x00028100
Rx Hdr Params: 0x0A050085
Rx PHY Status: 0x00000002

Stuck in wlan_mac_hw_rx_finish! 0x00020100
MAC HW Status: 0x00020100
Rx Hdr Params: 0x0A050085
Rx PHY Status: 0x00000002

Three repetitions are listed to highlight the *occasional* difference in wlan_mac_get_status(): 0x00028100 vs. 0x00020100. The 0x00028100 version appears every ~100 iterations. Clearly one bit changes between the 'Stuck in...' printf and the 'MAC HW...' printf - I guess it's toggling and the printf sometimes catches it?

Thanks for the RX PHY reset code.

Versions are:
Spyder: 2.3.0rc
Python: 3.4.1 (64bit)
Numpy: 1.8.1

Offline

 

#21 2016-Oct-27 10:30:38

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

The splitting code we have discussed since post 8 does work well, splitting the capture to simple data blobs and then recombining before generating the log index. However it fails on long captures, when the node's log has wrapped.
In Python we use node.log_configure(log_wrap_enable=True) to enable wrapping.

Collection and recombining of the log data appears to work well. But when the index is generated with log_util.gen_raw_log_index(log_data), the experiments framework fails with error: ERROR: Log file didn't start with valid header (offset 1048018768)!

This only happens when the total size of all log 'chunks' is greater than roughly 950MB; anything less is fine, which probably indicates the error occurs when the node's log wraps. The framework function node.log_get_size() returns the same value, 1048018692, from then on - it does not increase as you'd expect, or reset to zero and increase again.

There are no changes to the C code (except RX PHY reset as above) and also no changes to the wlan_exp Python framework.

Any ideas why this might happen and how to prevent it?

Offline

 

#22 2016-Oct-27 10:34:29

horace
Member
Registered: 2014-Jul-16
Posts: 63

Re: Consecutive calls to gen_raw_log_index()

I have rerun this experiment using only log_capture_continuous.py to capture a log which should wrap and observe the same problem. When the script ends it calls gen_raw_log_index() to write the index to the hdf5 file.

For captures which have wrapped in the log, it gives an Exception.  ERROR: Log file didn't start with valid entry header (offset 1048018024)!

As above, there are no modifications to the node, the Python wlan_exp framework or the capture scripts; a clean v1.5.3. Is this usual behaviour?

Offline

 

#23 2016-Oct-29 20:23:35

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

There are two useful conversations interleaved here - let's move the Rx PHY/MAC debugging to a new thread.

I'll take a look at the log wrapping behavior next.

Offline

 

#24 2016-Oct-30 14:19:50

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Consecutive calls to gen_raw_log_index()

I'm trying to reproduce any issues with log wrapping in v1.5.3, so far unsuccessfully.

I'm running an AP+NoMAC node with stock v1.5.3 C and wlan_exp.

For quicker testing you can modify EVENT_LOG_SIZE in wlan_mac_high.h (i.e. "#define EVENT_LOG_SIZE (100 << 20)"). I tested both with and without this change.

I then modified log_capture_continuous.py as follows:

Code:

import wlan_exp.ltg as ltg

LOG_READ_TIME = 5
MAX_LOG_SIZE = 1.1e9 #Stop shortly after log wraps

def run_experiment():
    ...

    # Add the current time to all the nodes
    wlan_exp_util.broadcast_cmd_write_time_to_logs(network_config)

    # Log testing - log full payloads, generate backlogged multicast flow with long payloads, short Tx time
    node.reset(log=True)
    node.log_configure(log_wrap_enable=True)
    node.set_tx_rate_multicast_data(mcs=7, phy_mode='NONHT')
    node.log_configure(log_full_payloads=True)
    node.configure_bss(ssid="AP-Test", channel=6, beacon_interval=None)
    node.ltg_configure(ltg.FlowConfigCBR(dest_addr=0xFFFFFFFFFFFF, payload_length=1400, interval=0), auto_start=True)

   ...

I also found that the default log_capture_continuous requires that I hit enter after the nominal ending condition to actually terminate the script. I think this is due to raw_input() blocking. This script was originally tested in ipython, maybe it behaves differently there. For easier testing I just deleted this code from the script:

Code:

### Removing this code so script can self-terminate with no keyboard input ###

        # See if there is any input from the user
        while not input_done:
            sys.stdout.flush()
            temp = raw_input("")

            if temp is not '':
                user_input = temp.strip()
                user_input = user_input.upper()

                if ((user_input == 'Q') or (user_input == 'QUIT') or (user_input == 'EXIT')):
                    input_done = True
                    exp_done   = True

### END: Removing this code so script can self-terminate ###

This logging+LTG setup will generate ~10MB of log data per second. In my setup this script runs for a while then writes a ~1.1GB log file and generates a valid index for the file.

A few observations:
-The node.get_log_size() method returns two values:
   -Log capacity: the number of bytes reserved in DRAM for the log data memory (1048018944 bytes in the v1.5.3 reference code)
   -Log size: the number of bytes of valid log data currently contained in the log data memory.

The log size is always <= the capacity. It will be less than the capacity when the last log entry leaves too few bytes at the tail of the memory block to write another entry; the logging code will not wrap a partial entry. Given this definition, the reported log size does not change after the log wraps, because the log memory area still contains that many bytes of log data and pre-wrap log data can still be retrieved before it is overwritten.
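A toy model of this wrap behavior (made-up sizes, not the real logging code):

```python
CAPACITY = 100        # hypothetical log memory size in bytes
ENTRY_SIZE = 8        # hypothetical fixed entry size

class ToyLog:
    """Bump-pointer log that wraps whole entries only."""
    def __init__(self):
        self.next_write = 0
        self.size = 0         # bytes of valid log data
        self.num_wraps = 0

    def alloc_entry(self):
        if self.next_write + ENTRY_SIZE > CAPACITY:
            # too few bytes left at the tail: wrap, never split an entry
            self.next_write = 0
            self.num_wraps += 1
        offset = self.next_write
        self.next_write += ENTRY_SIZE
        self.size = max(self.size, self.next_write)
        return offset

log = ToyLog()
for _ in range(20):   # 20 entries of 8 bytes exceed the 100-byte capacity
    log.alloc_entry()

print(log.size)       # 96 - 12 whole entries, less than the capacity
print(log.num_wraps)  # 1 - the size no longer grows after the wrap
```

The reported size stops at 96 (not 100) because a 13th entry would not fit in the 4 bytes at the tail, and it stays there forever once the log wraps.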

The values returned by node.get_log_indexes() have more information, including the number of times the node's log has wrapped.

-The node.log_get() method prints a spurious warning when reading log data from the end of the log data memory block: "WARNING:  Trying to get 4294967295 bytes...". This occurs because log_get_all_new() uses 0xFFFFFFFF as a magic value to ask the node to return all valid data from "offset" to the end of the log memory array. I just committed the fix.
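For reference, a sketch of how that magic value is meant to resolve to a byte count (resolve_read_length() is a hypothetical helper, not the actual wlan_exp code):

```python
READ_ALL_TO_END = 0xFFFFFFFF   # magic 'size': everything from offset to the end of the log

def resolve_read_length(req_size, offset, log_size):
    """Turn a requested size into an actual byte count, so the magic value
    never reaches any 'trying to get N bytes' warning check."""
    if req_size == READ_ALL_TO_END:
        return log_size - offset        # all valid data from offset to the end
    return min(req_size, log_size - offset)

print(resolve_read_length(READ_ALL_TO_END, 1000, 4096))  # 3096
print(resolve_read_length(512, 1000, 4096))              # 512
```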

Offline

 
