WARP Project Forums - Wireless Open-Access Research Platform


#1 2014-Sep-30 17:26:56

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Query regarding WARPnet example on finding aggregate throughput

I'm using the 'throughput_two_nodes.py' code on a WARPnet setup. While going through the code, I saw that the initial and final Tx/Rx stats for the STA and AP are stored in sta_rx_stats_start, ap_rx_stats_start and sta_rx_stats_end, ap_rx_stats_end. Looking inside these data structures, I found an entry named "data_num_tx_packets_low". I have the following questions:

1. What does this entry represent? From the name, I'm guessing it's the number of packets transmitted at the PHY layer. Is that right?

2. When I run the Python code for aggregate throughput, say for the STA->AP direction, why is the value of "data_num_tx_packets_low" the same in sta_rx_stats_start and sta_rx_stats_end? Since the STA is transmitting, shouldn't its start and end values be different?

3. If I were to calculate the total number of packets transmitted or received, including retransmissions, which entry of sta_rx_stats_start/ap_rx_stats_start should I be looking at?

4. Can I calculate the ratio of transmitted to received packets (#transmitted pkts / #received pkts) from the entries of sta_rx_stats_start/ap_rx_stats_start?


#2 2014-Sep-30 18:31:54

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Query regarding WARPnet example on finding aggregate throughput

I'm using the 'throughput_two_nodes.py' code on a WARPnet setup. While going through the code, I saw that the initial and final Tx/Rx stats for the STA and AP are stored in sta_rx_stats_start, ap_rx_stats_start and sta_rx_stats_end, ap_rx_stats_end. Looking inside these data structures, I found an entry named "data_num_tx_packets_low". I have the following questions:

The variable names in this script are confusing - they'll be clarified in the next release.

To be sure, I suggest looking at where the stats variables are assigned to see what each one represents. You'll see calls like "x = node_A.stats_get_txrx(node_B)". In this call, "x" contains a stats dictionary retrieved from node_A, containing counts of all Tx/Rx events that occurred at node_A for packets to/from node_B.
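
For example, here's a minimal sketch of that pattern, assuming wlan_exp node objects n_ap and n_sta have already been initialized as in the example scripts:

Code:

# Snapshot each node's Tx/Rx stats before and after the flow.
sta_stats_start = n_sta.stats_get_txrx(n_ap)   # STA's counts for packets to/from the AP
ap_stats_start  = n_ap.stats_get_txrx(n_sta)   # AP's counts for packets to/from the STA

# ... run the traffic flow here ...

sta_stats_end = n_sta.stats_get_txrx(n_ap)
ap_stats_end  = n_ap.stats_get_txrx(n_sta)

# PHY Tx events by the STA during the flow (retransmissions included):
sta_tx_low = (sta_stats_end['data_num_tx_packets_low'] -
              sta_stats_start['data_num_tx_packets_low'])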

1. What does this entry represent? From the name, I'm guessing it's the number of packets transmitted at the PHY layer. Is that right?

Yes - the tx_*_low fields count PHY transmission events. One MPDU might generate many tx_low events due to re-transmissions by the DCF.

The fields in the Tx/Rx stats dictionary are the same as the fields in the Tx/Rx log entry, described in the wlan_exp docs.

2. When I run the Python code for aggregate throughput, say for the STA->AP direction, why is the value of "data_num_tx_packets_low" the same in sta_rx_stats_start and sta_rx_stats_end? Since the STA is transmitting, shouldn't its start and end values be different?

3. If I were to calculate the total number of packets transmitted or received, including retransmissions, which entry of sta_rx_stats_start/ap_rx_stats_start should I be looking at?

4. Can I calculate the ratio of transmitted to received packets (#transmitted pkts / #received pkts) from the entries of sta_rx_stats_start/ap_rx_stats_start?

Hopefully the comments above clarify the meaning of these fields. The Tx/Rx stats struct counts packets and bytes; it is possible to calculate the ratios you describe here.
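
As an illustration, a ratio for the STA->AP direction could be computed from the start/end snapshots like this. Note that 'data_num_rx_packets' is a hypothetical field name here (only data_num_tx_packets_low is confirmed above); check the Tx/Rx log entry documentation for the exact names:

Code:

# 'data_num_rx_packets' is an assumed field name - verify it against
# the Tx/Rx log entry docs before using.
sta_tx_low = (sta_stats_end['data_num_tx_packets_low'] -
              sta_stats_start['data_num_tx_packets_low'])   # STA PHY Tx events
ap_rx = (ap_stats_end['data_num_rx_packets'] -
         ap_stats_start['data_num_rx_packets'])             # AP receptions

# #transmitted pkts / #received pkts for the STA->AP flow
tx_rx_ratio = float(sta_tx_low) / ap_rx if ap_rx else float('nan')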


#3 2014-Oct-01 14:15:04

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Re: Query regarding WARPnet example on finding aggregate throughput

Many thanks. Your comments were very helpful. I have an additional question:

At the receiver, is it possible to calculate the total number of packets received, including those received due to re-transmissions? For instance, suppose the packet flow is STA->AP and the STA does not receive an ACK from the AP for a specific packet, so it re-transmits the packet even though it was successfully received at the AP. In this case the AP receives the same packet twice. Would the AP count the number of packets received as 2 or as 1? The reason I'm asking is that in the Tx/Rx stats there is a variable data_num_tx_packets_low, but nothing like data_num_rx_packets_low.


#4 2014-Oct-01 14:58:14

chunter
Administrator
From: Mango Communications
Registered: 2006-Aug-24
Posts: 1212

Re: Query regarding WARPnet example on finding aggregate throughput

The value stored in the statistics is indeed post-de-duplication; in your scenario, the number of packets received would be counted as 1. Using the AP as an example, you can see in the AP's Rx code that it punts on the reception if it sees that it was a duplicate, and it punts before the statistics are updated a few lines below that.

You could adjust the AP and the STA so that receptions are counted prior to punting on duplicates, but I highly recommend moving to the logging framework as a more general solution and dropping statistics altogether. Everything you can do with statistics you can also do with the event log, and much more. Every reception (including duplicates) is already being logged. If you want to know how many total receptions there were, it's an easy Python operation to count how many receptions were logged. If you want to know how many de-duplicated receptions there were, it's just as easy to count how many logged receptions weren't flagged as duplicates. It's also easy to calculate throughput directly from the logs; see this example for details.
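
For instance, counting total and de-duplicated receptions from an STA log file might look like the sketch below. The entry-type name 'RX_OFDM' and the duplicate-flag mask are assumptions (entry names vary between wlan_exp versions); check wlan_exp.log.entry_types for the exact names and constants.

Code:

import wlan_exp.log.util as log_util
import wlan_exp.log.util_hdf as hdf_util

log_data      = hdf_util.hdf5_to_log_data(filename='sta_log.hdf5')
raw_log_index = log_util.gen_raw_log_index(log_data)
log_index     = log_util.filter_log_index(raw_log_index, include_only=['RX_OFDM'])
log_np        = log_util.log_data_to_np_arrays(log_data, log_index)

rx        = log_np['RX_OFDM']
total_rx  = len(rx)                    # all receptions, duplicates included
FLAG_DUP  = 0x1                        # assumed duplicate-flag bit - verify in entry_types
unique_rx = total_rx - int(((rx['flags'] & FLAG_DUP) > 0).sum())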


#5 2014-Oct-03 18:48:57

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Re: Query regarding WARPnet example on finding aggregate throughput

Thank you for your reply. I tried to use the logging framework to calculate the throughput, the number of packets transmitted (including retransmissions), and the number of packets received (including duplicates). However, I'm facing a problem:

What I'm trying to do is as follows:

1.  Start a flow from AP's LTG to the STA
2.  Stop the flow and purge the remaining transmissions in the queue
3.  Write the logs from the AP and STA to an HDF5 file and process the files using log_process_details.py

This is the same as log_capture_two_node_two_flow.py, except that there is no flow from STA->AP, only AP->STA.

The problem is that the number of packets received at the STA (including duplicates) is more than the number of packets transmitted by the AP (including retransmissions), and this happens in every single run. Following is the data from one of the runs:

AP log:
Tx Counts (CPU Low - includes retransmissions):
Dest Addr                # Pkts    # Bytes      MAC Addr Type
40:d8:55:04:23:4c          4765    6785360      Mango WARP Hardware

STA log:
Rx Counts (including duplicates):
Dest Addr                # Pkts    # Bytes      MAC Addr Type
40:d8:55:04:23:7e          4785    6674378      Mango WARP Hardware


#6 2014-Oct-03 21:52:28

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Query regarding WARPnet example on finding aggregate throughput

What filters are you applying to the Tx and Rx counts?

One possibility is that your Rx packet count at the STA includes (broadcast) beacon transmissions by the AP, while the AP Tx count only includes (unicast) data transmissions to the STA.
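
To compare like with like, you can filter both logs down to data frames before counting. A sketch (the 'pkt_type' field and the DATA constant below are assumptions - verify them against wlan_exp.log.entry_types for your version):

Code:

import numpy as np
import wlan_exp.log.util as log_util
import wlan_exp.log.util_hdf as hdf_util

PKT_TYPE_DATA = 2    # assumed constant - check entry_types for the real value

def data_pkt_count(filename, entry_name):
    # Count log entries of the given type that carry DATA frames.
    log_data  = hdf_util.hdf5_to_log_data(filename=filename)
    raw_index = log_util.gen_raw_log_index(log_data)
    log_index = log_util.filter_log_index(raw_index, include_only=[entry_name])
    log_np    = log_util.log_data_to_np_arrays(log_data, log_index)
    entries   = log_np[entry_name]
    # Counting only DATA frames excludes (broadcast) beacons on both sides
    return int(np.sum(entries['pkt_type'] == PKT_TYPE_DATA))

ap_tx  = data_pkt_count('ap_two_node_two_flow_capture.hdf5',  'TX')
sta_rx = data_pkt_count('sta_two_node_two_flow_capture.hdf5', 'RX_OFDM')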


#7 2014-Oct-06 18:04:49

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Re: Query regarding WARPnet example on finding aggregate throughput

Thank you for your reply. That was the problem.


#8 2014-Oct-07 16:48:08

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Re: Query regarding WARPnet example on finding aggregate throughput

There is an interesting problem that I'm facing while using log_process_details.py. My experimental setup is as follows:

Two WARP boards connected through a programmable attenuator with 30 dB default attenuation. The WARP boards are set up for WARPnet.

I'm running the following loop:
Loop 1: Change the attenuation value on the attenuator from 0 to 50 dB in steps of 5 dB:
Loop 2: Do the following 'x' times:
  Start a flow from AP's LTG to the STA
  Stop the flow and purge the remaining transmissions in the queue
  Write the logs from AP and STA to a hdf5 file and process the files using log_process_details.py

However, after Loop 1 has run 5 times, the 6th iteration gives the following error, independent of the value of x (I tried x = 5, 10, 11, 12):

Code:

Log Sizes:
AP  = 16,389,536 bytes
STA = 36,748,980 bytes

ap_two_node_two_flow_capture.hdf5
Error writing log file: Did not provide any log data.


sta_two_node_two_flow_capture.hdf5
Error writing log file: Did not provide any log data.

It does generate the HDF5 files; however, I think it does not write anything inside them, and maybe they are filled with a bunch of zeros, which is why I get the following error when I call log_process_details.py for the two files. (go.py is my Python script that calls log_process_details.py; I've put the entire code of log_process_details.py into a function called proc, and in my script I do "import log_process_details as logg", so the call logg.proc(filename) is equivalent to passing the HDF5 file to log_process_details.py.)

Code:

Traceback (most recent call last):
  File "go.py", line 230, in <module>
    tx_data=logg.proc(AP_HDF5_FILENAME)
  File "D:\peshal\Python_Reference\examples\txrx_log\new\log_process_details.py", line 63, in proc
    log_data      = hdf_util.hdf5_to_log_data(filename=LOGFILE)
  File "D:\peshal\Python_Reference\wlan_exp\log\util_hdf.py", line 636, in hdf5_to_log_data
    log_data    = container.get_log_data()
  File "D:\peshal\Python_Reference\wlan_exp\log\util_hdf.py", line 331, in get_log_data
    ds.read_direct(log_data_np)
  File "D:\peshalPython\Anaconda\lib\site-packages\h5py\_hl\dataset.py", line 600, in read_direct
    for mspace in dest_sel.broadcast(source_sel.mshape):
  File "D:\peshalPython\Anaconda\lib\site-packages\h5py\_hl\selections.py", line 300, in broadcast
    chunks = tuple(x/y for x, y in zip(count, tshape))
  File "D:\peshalPython\Anaconda\lib\site-packages\h5py\_hl\selections.py", line 300, in <genexpr>
    chunks = tuple(x/y for x, y in zip(count, tshape))
ZeroDivisionError: long division or modulo by zero

I then took the two HDF5 files and tried to run log_process_details.py on them separately, and got the following error message:

Reading log file 'ap_two_node_two_flow_capture.hdf5' (  0.0 MB)

Code:

Traceback (most recent call last):
  File "log_process_details.py", line 271, in <module>
    proc("ap_two_node_two_flow_capture.hdf5")
  File "log_process_details.py", line 63, in proc
    log_data      = hdf_util.hdf5_to_log_data(filename=LOGFILE)
  File "D:\peshal\Python_Reference\wlan_exp\log\util_hdf.py", line 630, in hdf5_to_log_data
    file_handle = hdf5_open_file(filename, readonly=True)
  File "D:\peshal\Python_Reference\wlan_exp\log\util_hdf.py", line 523, in hdf5_open_file
    file_handle = h5py.File(filename, mode='r')
  File "D:\peshalPython\Anaconda\lib\site-packages\h5py\_hl\files.py", line 222, in __init__
    fid = make_fid(name, mode, userblock_size, fapl)
  File "D:\peshalPython\Anaconda\lib\site-packages\h5py\_hl\files.py", line 79, in make_fid
    fid = h5f.open(name, h5f.ACC_RDONLY, fapl=fapl)
  File "h5f.pyx", line 71, in h5py.h5f.open (h5py\h5f.c:1817)
IOError: Unable to open file (Truncated file: eof = 1592, sblock->base_addr = 0, stored_eoa = 2144)

The interesting thing is that this happens on every 6th iteration of Loop 1. Can you tell me what could possibly be going wrong?

Last edited by peshalnayak (2014-Oct-07 16:53:39)


#9 2014-Oct-08 09:21:10

welsh
Administrator
From: Mango Communications
Registered: 2013-May-15
Posts: 612

Re: Query regarding WARPnet example on finding aggregate throughput

You are correct about the message

Error writing log file: Did not provide any log data.

This indicates that the script did not get any information from the board.  If you look at the code that is used to generate this error:

Code:

# Look at the final log sizes for reference
ap_log_size  = n_ap.log_get_size()
sta_log_size = n_sta.log_get_size()

print("\nLog Sizes:  AP  = {0:10,d} bytes".format(ap_log_size))
print("            STA = {0:10,d} bytes".format(sta_log_size))

# Write Log Files for processing by other scripts
print("\nWriting Log Files...")

write_log_file(AP_HDF5_FILENAME, n_ap.log_get_all_new(log_tail_pad=0))
write_log_file(STA_HDF5_FILENAME, n_sta.log_get_all_new(log_tail_pad=0))

It seems like there was no new data in the log for either node. This could be due to an error somewhere earlier in the script. Unfortunately, that error does not leave the file in a good state (i.e., it doesn't close the file handle and may leave some other bad state), and we need to fix that.

One thing you can do between each of your loops is reset the log on each node:

Code:

for node in nodes:
    node.reset(log=True)
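
For instance, the reset could go at the top of each Loop 2 iteration. A sketch (set_attenuation() is a hypothetical helper for your programmable attenuator; n_ap and n_sta are the wlan_exp nodes):

Code:

num_trials = 10                          # the 'x' in Loop 2

for atten_db in range(0, 55, 5):         # Loop 1: 0 to 50 dB in 5 dB steps
    set_attenuation(atten_db)            # hypothetical attenuator control
    for trial in range(num_trials):      # Loop 2
        for node in [n_ap, n_sta]:
            node.reset(log=True)         # clear each node's log before the trial
        # ... start LTG flow, stop + purge queues, write/process HDF5 logs ...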

If this doesn't help, then it would be helpful to post your Loop 2 code.


#10 2014-Oct-10 19:47:58

peshalnayak
Member
Registered: 2014-Sep-30
Posts: 6

Re: Query regarding WARPnet example on finding aggregate throughput

Thank you. It worked!
