Thanks welsh for the explanation.
I have a few more queries.
1. In the log_capture_two_node.py script, I also included the get_txrx_stats(ap) method to calculate the goodput between two nodes. For MCS=7 and Rx power around -78 dBm, the TP was 0.0 Mbps (Goodput, stats_txrx_at_STA).
When I extracted the RX_OFDM_LTG information with payload and timestamp information, I saw that there are 16 successful non-duplicate packets at the Rx (Rx_Packets).
For non-duplicate packets, at the STA:
rx_sta_idx = (rx_sta['addr2'] == addr_ap) & ((rx_sta['flags'] & 0x1) == 0)
rx_sta_from_ap = rx_sta[rx_sta_idx]
Using the throughput_vs_time.py script, the result was the same (TPvsTime).
But observation of the TX_LOW_LTG information showed that there were no successful transmissions; ACK=0 for all packet transmissions.
=> I couldn't understand why the goodput calculation using stats_txrx_at_STA didn't count any bytes, while the RX_OFDM_LTG logs contain 16 non-duplicate packets and the throughput_vs_time graph also shows some traffic.
2. I came across the above queries while looking at the timestamp information to get the propagation delay and the delay between successive receptions and transmissions. What additional event log information can be used to extract the delay between successive received packets, the delay between successive transmitted packets, and the propagation delay from a Tx packet to its Rx, excluding retransmissions?
If you look at where packets are logged vs. where statistics are logged, you can see that only "FCS_GOOD" packets are counted in the statistics, while packets are logged regardless of their FCS status.
The issue you are running into is that you are trying to compare two different definitions of packets. Statistics are computed on "all FCS_GOOD packets" while the definition you are using is "all non-duplicate packets". These are different sets, and this is why you are seeing the discrepancy.
Unfortunately, this arises from the fact that we changed the definition used to filter packets in the log_process_throughput_vs_time.py example. You can see that in version 0.96, we used the definition:
rx_ap_idx = (rx_ap['addr2'] == addr_sta) & ((rx_ap['flags'] & 0x1) == 0)
However, in version 1.3, we use the definition:
rx_ap_idx = (rx_ap['addr2'] == addr_sta) & ((rx_ap['flags'] & 0x1) == 0) & (rx_ap['fcs_result'] == 0) & ( (rx_ap['pkt_type'] == 2) | (rx_ap['pkt_type'] == 3) )
As you can see, the second definition matches the definition used to update the data portion of the statistics structure (i.e. non-duplicate, FCS_GOOD, data packets).
Given that you also didn't see any ACKs, I'm pretty sure none of the 16 packets had a good FCS result.
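To see the effect concretely, here's a minimal sketch with synthetic numpy records (the field names follow the filters above; the three packets and the addresses are made up for illustration):

```python
import numpy as np

# Synthetic RX_OFDM-style log entries. flags bit 0 = duplicate flag,
# fcs_result 0 = FCS_GOOD, pkt_type 2/3 = data packet types.
rx_ap = np.array(
    [(b'\x02', 0, 0, 3),   # non-duplicate, FCS_GOOD, data -> counted by both filters
     (b'\x02', 0, 1, 3),   # non-duplicate, FCS bad        -> only the old filter counts it
     (b'\x02', 1, 0, 3)],  # duplicate                     -> neither filter counts it
    dtype=[('addr2', 'S6'), ('flags', '<u2'), ('fcs_result', '<u1'), ('pkt_type', '<u1')])
addr_sta = b'\x02'

# v0.96-style filter: non-duplicate packets from the STA
old_idx = (rx_ap['addr2'] == addr_sta) & ((rx_ap['flags'] & 0x1) == 0)

# v1.3-style filter: additionally require FCS_GOOD and a data packet type
new_idx = old_idx & (rx_ap['fcs_result'] == 0) & \
          ((rx_ap['pkt_type'] == 2) | (rx_ap['pkt_type'] == 3))

print(old_idx.sum(), new_idx.sum())  # 2 1 -- the FCS-bad packet is dropped by the new filter
```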
2. I came across the above queries while looking at the timestamp information to get the propagation delay and the delay between successive receptions and transmissions. What additional event log information can be used to extract the delay between successive received packets, the delay between successive transmitted packets, and the propagation delay from a Tx packet to its Rx, excluding retransmissions?
Unfortunately, I don't quite remember everything that was improved in terms of the timestamping of various events between v0.96 and v1.3. Chris will be back in the office next week and might have some further insights. The main issue is that the nodes have to share the same timebase if you want to synchronize events between them. There are a number of commands that can help keep things in sync. First, you can see that on a beacon reception, the STA will update its timebase (note: this link is to the v0.96 STA). Also, there are some wlan_exp utilities that can help manage time on multiple nodes: broadcast_cmd_set_time and broadcast_cmd_write_time_to_logs. You want to make sure that your timebases are in sync before starting your experiment. Finally, if you look around the log utilities, you can find a few example functions that deal with time.
Hope that helps.
Hi Team,
I have a few more queries regarding the TX, TX_LOW and RX_OFDM log entry information.
1. In TX,
I am particularly interested in the time_to_accept and time_to_done fields.
From Log_entry_types
time_to_accept= Time duration in microseconds between packet creation and packet acceptance by CPU Low
--> Is it the time duration when all bytes are completely processed at MAC Low?
time_to_done = Time duration in microseconds between packet acceptance by CPU Low and Tx completion in CPU Low
--> Does this represent: (timestamp of the last byte successfully sent at the Tx) - (timestamp when all bytes are received at MAC Low from MAC High)?
2. LTG only packets info:
TX:

Packet_created  Time_to_accept  Time_to_done  Mac_seq
41944           23              1473          3868
42008           1439            1555          3869
42072           2937            1600          3870
42136           4480            1590          3871
TX_LOW information (only successfully transmitted packets):

PHY_Transmit  ACK  Mac_Seq_No
41976         1    3868
43540         1    3869
45147         1    3870
46744         1    3871
Eq. 1: Timestamp when the whole packet is received from MAC High at MAC Low = timestamp at packet creation + time_to_accept. E.g., for Mac_seq=3868: Timestamp_packet_accepted_at_MAC_Low = 41944 + 23 = 41967 us.
Eq. 2: time_to_done = (timestamp_of_last_byte - PHY_timestamp_start) + (PHY_timestamp_start - timestamp_packet_accepted_from_MAC_High_at_MAC_Low)
=> Is my interpretation of the transmission delay at the PHY layer in Eq. 3 correct? [I have only subtracted the delay at MAC Low from time_to_done.]
Eq. 3: Tx_packet_complete_transmission_at_PHY = time_to_done - (PHY_timestamp_start - timestamp_packet_accepted_from_MAC_High_at_MAC_Low)

E.g., using Eq. 3, the transmission delay found for the 4 packets was:

Time_for_Complete_PHY_Tx  delay_from_MAC_Low_to_PHY
1464                      9
1462                      93
1462                      138
1462                      128
time_to_accept= Time duration in microseconds between packet creation and packet acceptance by CPU Low
--> Is it the time duration when all bytes are completely processed at MAC Low?
No- the accept time is marked when CPU Low begins processing the packet. This occurs immediately after CPU Low processes the TX_MPDU_READY message from CPU High.
time_to_done = Time duration in microseconds between packet acceptance by CPU Low and Tx completion in CPU Low
--> Does this represent: (timestamp of the last byte successfully sent at the Tx) - (timestamp when all bytes are received at MAC Low from MAC High)?
"Done" occurs when CPU Low completes its Tx state machine for the given payload. The accept -> done time encompasses all Tx attempts and all MAC deferrals: every MPDU has one "accept" (high -> low) event and one "done" (low -> high) event, even if the MPDU is retransmitted many times.
Each packet passed from high to low has a single TX log entry. Think of this entry as TX_HIGH. These entries do not contain timestamps for actual PHY Tx events. Each PHY Tx has a single TX_LOW entry which includes the absolute timestamp of the actual PHY Tx. This timestamp is recorded by the Tx logic to avoid any software jitter. The TX_LOW timestamp reflects the time of the first Tx sample, after any MAC processing or deferrals. The delay between the "accept" time and the first PHY Tx is arbitrary: it could be a few usec if the medium is idle, or huge (unbounded, really) if the medium is busy.
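This accept-to-first-PHY-Tx delay is easy to compute from the logs. A minimal sketch in plain Python, using the TX and TX_LOW numbers quoted earlier in this thread (the dict-based join on sequence number is just for illustration; the wlan_exp log tools return numpy arrays):

```python
# Per-MPDU delay between the CPU Low "accept" time and the first PHY Tx,
# joining TX (high) and TX_LOW entries on MAC sequence number.
# Values are taken from the tables earlier in this thread.
tx_high = {3868: (41944, 23),    # seq -> (timestamp_create, time_to_accept)
           3869: (42008, 1439),
           3870: (42072, 2937),
           3871: (42136, 4480)}
tx_low = {3868: 41976,           # seq -> PHY TxStart timestamp from TX_LOW
          3869: 43540,
          3870: 45147,
          3871: 46744}

defer_us = {seq: tx_low[seq] - (create + accept)
            for seq, (create, accept) in tx_high.items()}
print(defer_us)  # {3868: 9, 3869: 93, 3870: 138, 3871: 128}
```

Note these match the delay_from_MAC_Low_to_PHY column computed earlier in the thread: a few microseconds when the medium was idle, over a hundred when it was not.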
Hi Murpho,
Thanks for the explanation.
1. Just to verify,
time_to_accept (duration) = after this time, CPU Low starts processing the packet
time_to_done (duration) = timestamp(Tx completion in CPU Low, including retransmissions) - timestamp(CPU Low starts processing the packet)
2. I have drawn a message/timing chart for the first two packets. From the definition of t2_done, the transmitter has just finished all retransmissions. Is it correct that at the timestamp of t2_done, the Tx is still waiting for the ACK from the Rx?
TX [upper-layer MAC]:

Pkt_crt  t2_accept  t2_done  Payload  Pkt_type  Mac_seq_no  Result
39941    21         436      1528     3         1837        0
40005    400        554      1528     3         1838        0
40069    897        1860     1528     3         1839        0

TX_LOW [lower-layer MAC Tx]:

Pkt_transmit  Payload  Flags  Pkt_type  Mac_Seq_No
39971         1528     1      3         1837
40533         1528     1      3         1838
41004         1528     0      3         1839

RX_OFDM [lower-layer MAC Rx]:

Pkt_received  Payload  FCS_result  Mac_Seq_No
40001         1528     0           1837
40563         1528     0           1838
41035         1528     1           1839
3. Looking at the TX_LOW entry for Mac_Seq_No=1837, the flags say that an ACK was received for the packet. But at timestamp 39971, the transmitter is still waiting for the ACK from the receiver and will receive it a short time later. Am I correct?
4. Sample Tx_entry and Rx_entry.
I extracted:

Successful Tx packets = packets that received an ACK:
tx_ap_low_ltg_id1 = (tx_pkt['flags'] == 1) & (tx_pkt['pkt_type'] == 3)
tx_pkt_success = tx_pkt[tx_ap_low_ltg_id1]

Successful Rx packets = non-duplicate, error-free received packets:
rx_id_ltg2 = ((rx_pkt['flags'] & 0x1) == 0) & (rx_pkt['fcs_result'] == 0) & (rx_pkt['pkt_type'] == 3)
rx_pkt_success = rx_pkt[rx_id_ltg2]
-> I found that the number of successful Tx packets might not equal the number of successful Rx packets, since the Rx packets don't account for dropped ACKs. For the TP vs. time graph, are we only considering the packets successfully received at the receiver, even if their ACKs are not received by the transmitter?
mcccliii wrote:
1. Just to verify,
Code:
time_to_accept (duration) = after this time, CPU Low starts processing the packet
time_to_done (duration) = timestamp(Tx completion in CPU Low, including retransmissions) - timestamp(CPU Low starts processing the packet)
Yes, that's correct. The values in the TX log actually come directly from the tx_frame_info struct inside each Tx packet buffer. Specifically, the "delay_accept" and "delay_done" fields in that struct correspond to these two times. "delay_accept" represents how many microseconds after "timestamp_create" CPU_LOW actually got around to starting to process the frame (so it's basically a queuing delay value). "delay_done" represents how many microseconds after the accept the frame finished transmitting (including all retransmissions).
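A minimal sketch of this arithmetic, using the seq-3868 row from the TX table earlier in the thread:

```python
# Recover absolute "accept" and "done" timestamps from a TX entry,
# per the delay_accept / delay_done semantics described above.
def tx_timeline(timestamp_create, delay_accept, delay_done):
    t_accept = timestamp_create + delay_accept  # CPU_LOW starts processing the frame
    t_done = t_accept + delay_done              # Tx state machine done (all retransmissions)
    return t_accept, t_done

# Values from the seq-3868 row of the TX table in this thread
t_accept, t_done = tx_timeline(41944, 23, 1473)
print(t_accept, t_done)  # 41967 43440
```

The 41967 us accept time matches the Eq. 1 example worked out earlier in the thread.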
mcccliii wrote:
2. I have drawn message/timing chart for 1st two packets. From definition, t2_done, transmitter is just finished with all successful retransmission. Is it correct that at timestamp of t2_done, Tx is still waiting for the ACK from Rx.
No, not quite. Here "done" means there is one of three outcomes for the transmitted MPDU:
1. The MPDU received an ACK during its ACK timeout period.
2. The MPDU received some other unrelated frame during its ACK timeout period.
3. The MPDU has waited for an entire ACK timeout and received no response.
The latter two cases are transmission failures. The only way you'll see those on the final retransmission before a "done" event is if CPU_LOW has hit the maximum retry count. So, at t2_done in your figure, the ACK has already been received.
mcccliii wrote:
Code:
TX [upper-layer MAC]:

Pkt_crt  t2_accept  t2_done  Payload  Pkt_type  Mac_seq_no  Result
39941    21         436      1528     3         1837        0
40005    400        554      1528     3         1838        0
40069    897        1860     1528     3         1839        0

TX_LOW [lower-layer MAC Tx]:

Pkt_transmit  Payload  Flags  Pkt_type  Mac_Seq_No
39971         1528     1      3         1837
40533         1528     1      3         1838
41004         1528     0      3         1839

RX_OFDM [lower-layer MAC Rx]:

Pkt_received  Payload  FCS_result  Mac_Seq_No
40001         1528     0           1837
40563         1528     0           1838
41035         1528     1           1839

3. Looking at the TX_LOW entry for Mac_Seq_No=1837, the flags say that an ACK was received for the packet. But at timestamp 39971, the transmitter is still waiting for the ACK from the receiver and will receive it a short time later. Am I correct?
Yes, I think that's correct. According to that chart, the MPDU with seq 1837 was "done" at timestamp (39941+436=40377). At that time, the reception of the ACK was complete. 39971 is the TxStart time of the TX_LOW entry for that MPDU. At that point in time, the waveform was just beginning to hit the air. The node had not yet entered the ACK timeout period to wait for the ACK reception. The ACK will be received about (40377-39971)=406 usec later. Most of that time is the time it took to actually send your MPDU, and 16 usec of it was a SIFS interval.
mcccliii wrote:
4. Sample Tx_entry and Rx_entry.
I extracted,
successful Tx packet = packets that received an ACK
Code:
tx_ap_low_ltg_id1 = (tx_pkt['flags'] == 1) & (tx_pkt['pkt_type'] == 3)
tx_pkt_success = tx_pkt[tx_ap_low_ltg_id1]

successful Rx packet = non-duplicate and error-free packets received
Code:
rx_id_ltg2 = ((rx_pkt['flags'] & 0x1) == 0) & (rx_pkt['fcs_result'] == 0) & (rx_pkt['pkt_type'] == 3)
rx_pkt_success = rx_pkt[rx_id_ltg2]

-> I found that the number of successful Tx packets might not equal the number of successful Rx packets, since the Rx packets don't account for dropped ACKs. For the TP vs. time graph, are we only considering the packets successfully received at the receiver, even if their ACKs are not received by the transmitter?
You're correct; a dropped ACK can skew Tx-based throughput calculations compared to Rx-based ones. As a mental exercise, imagine a system where ACKs are never delivered reliably yet the MPDUs being ACKed are always delivered reliably. From the Tx perspective, no packets were ever "successfully" delivered, yet from the Rx perspective, every MPDU was delivered (though admittedly with lots of duplicates, since everything gets retransmitted). That's a pretty contrived example, however. Typically, if an MPDU gets through, its ACK is also very likely to get through in reasonably slow-fading channels. This is especially true when you consider that ACKs are coded at the same rate or slower than the MPDUs they are responding to. Regardless, if there is a choice between measuring throughput at the receiver or at the transmitter, you should probably go with the receiver. Our Throughput vs. Time wlan_exp example uses the receive logs only for its throughput calculations.
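A small synthetic sketch of the two filters from the question (the data is made up; the last MPDU is delivered but its ACK is lost, so the Tx-side count comes up short):

```python
import numpy as np

# Tx-side log: flags == 1 means an ACK was received for that MPDU.
tx_pkt = np.array([(1, 3), (1, 3), (0, 3)],  # last MPDU delivered, but its ACK was lost
                  dtype=[('flags', '<u2'), ('pkt_type', '<u1')])

# Rx-side log: all three MPDUs arrived, non-duplicate, with good FCS.
rx_pkt = np.array([(0, 0, 3), (0, 0, 3), (0, 0, 3)],
                  dtype=[('flags', '<u2'), ('fcs_result', '<u1'), ('pkt_type', '<u1')])

tx_success = ((tx_pkt['flags'] == 1) & (tx_pkt['pkt_type'] == 3)).sum()
rx_success = (((rx_pkt['flags'] & 0x1) == 0) &
              (rx_pkt['fcs_result'] == 0) &
              (rx_pkt['pkt_type'] == 3)).sum()

print(tx_success, rx_success)  # 2 3 -- the Rx log is the better basis for throughput
```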
Hi team,
Is it possible to shift 802.11g operation by 100 MHz, i.e. to 2.3 GHz or 2.5 GHz, or to use Channel 14 (2.484 GHz)?
Last edited by mcccliii (2015-Jul-20 10:21:13)
The MAX2829 transceiver is indeed capable of tuning to Channel 14. By default, the 802.11 Reference Design only allows tuning to the universally-allowed channels in the 2.4GHz and 5GHz bands. If you are running an experiment in an environment where you are allowed to use channel 14, you can explicitly enable tuning to that frequency by modifying the wlan_lib_channel_verify() library function that is shared by both CPU_LOW and CPU_HIGH.
We've never tried tuning in the 2.3 or 2.5 GHz bands. The MAX2829 PLL is configured with integer and fractional divider ratios, set via a few registers. I would suggest looking at the radio_controller driver and the MAX2829 datasheet to see whether PLL configurations outside the usual Wi-Fi channels are possible.
Hi Chunter,
We are allowed to operate in CH 14.
1. Will the WARP 802.11g design be able to operate on Ch 14 by just adding "case 14:" in wlan_lib_channel_verify(), or do I need to make further changes?
int wlan_lib_channel_verify(u32 mac_channel){
    int return_value;
    // We allow a subset of 2.4 and 5 GHz channels
    switch(mac_channel){
        // 2.4GHz channels
        case 1:
        case 2:
        case 3:
        case 4:
        case 5:
        case 6:
        case 7:
        case 8:
        case 9:
        case 10:
        case 11:
        case 14:
        // 5GHz channels
        case 36:
        case 44:
        case 48:
            return_value = 0;
        break;
        default:
            return_value = -1;
        break;
    }
    return return_value;
}
2. In order to set the operating channel in the Python framework using node.set_channel(CHANNEL), will it be sufficient to just add the information for Ch 14 to the wlan_channel dictionary in wlan_exp.util?
That should be all that's required, though we have not actually tried this (Ch 14 is off-limits in the US).
Thanks Murpho,
I have some more doubts regarding the timestamp information.
I want to expand the DCF packet transmission timing from DATA --> ACK for one packet.
For MCS=1 (9 Mbps) and a distance of 1 m (AP->STA), I extracted the following TX, TX_LOW and RX_OFDM information.
All packets were transmitted successfully.

From the TX_LOW entries, the timestamp when the first bit starts to hit the air:

Tx_PHY  Payload  Ack  Mac_seq  Uniq_seq
42523   1428     1    1093     1093
44010   1428     1    1094     1094
45533   1428     1    1095     1095
47021   1428     1    1096     1096
48517   1428     1    1097     1097
50022   1428     1    1098     1098
51446   1428     1    1099     1099

From RX_OFDM, the timestamp when the first bit is received at the Rx PHY:

Rx_PHY  Payload  FCS  Mac_seq
42549   1428     0    1093
44037   1428     0    1094
45560   1428     0    1095
47047   1428     0    1096
48543   1428     0    1097
50049   1428     0    1098
51473   1428     0    1099

And the ACK received time calculated from the TX entry for each packet, using ACK_received = time_pkt_created + time_accept + time_done:

ACK_received  Ack  Mac_seq  Uniq_seq
43893         0    1093     1093
45380         0    1094     1094
46903         0    1095     1095
48390         0    1096     1096
49887         0    1097     1097
51392         0    1098     1098
52816         0    1099     1099
From these TX, TX_LOW and RX_OFDM entries, I have extracted following information:
Mac_seq: [1093, 1094, 1095, 1096, 1097, 1098, 1099]
Tx_PHY --> Rx_PHY: [26, 27, 27, 26, 26, 27, 27] us
Tx_PHY --> ACK received: [1370, 1370, 1370, 1369, 1370, 1370, 1370] us
1. I am conducting the experiment at 1 m between AP and STA. The propagation delay should be 1/(3x10^2) us ≈ 0.003 us, which is negligible.
But Tx_PHY --> Rx_PHY = 26 or 27 us. Is this because of T_PHY_RX_START_DLY = 25?
2. Following the 802.11g specification and wlan_mac_low.h, with the following parameters:
#define T_DIFS (T_SIFS + 2*T_SLOT)
#define T_EIFS 88
#define T_PHY_RX_START_DLY 25
#define T_TIMEOUT (T_SIFS+T_SLOT+T_PHY_RX_START_DLY)
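These derived constants can be sanity-checked numerically. A sketch in Python (T_SIFS = 16 us is used elsewhere in this thread; T_SLOT = 9 us is the standard 802.11 OFDM slot value and is an assumption here):

```python
# Derived MAC timing constants, mirroring the #defines in wlan_mac_low.h.
# Assumed base values: T_SIFS = 16 us (per this thread), T_SLOT = 9 us (802.11 OFDM slot).
T_SIFS = 16
T_SLOT = 9
T_PHY_RX_START_DLY = 25

T_DIFS = T_SIFS + 2 * T_SLOT                     # DIFS = SIFS + 2 slots
T_TIMEOUT = T_SIFS + T_SLOT + T_PHY_RX_START_DLY  # ACK timeout window

print(T_DIFS, T_TIMEOUT)  # 34 50
```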
I break down the time between DATA-->ACK for 1st packet (Mac_seq=1093) as follows:
1st packet
payload = 1428 B (9 Mbps) = 1428*8/9 = 1269 us
Tx_PHY --> Rx_PHY = 26 us
T_SIFS = 10+6 = 16 us
ACK = 14 B = 14*8/9 = 12 us
Rx_PHY --> Tx_PHY = 26 us
Tx_PHY --> Tx_ACK = 1269+26+16+12+26 = 1349 us != (t_done - t_phy_start = 1370)
Is this the correct way to break down the time duration?
3. What time duration does T_TIMEOUT represent: (first byte of ACK - last byte transmitted) or (last byte of ACK - last byte transmitted)?
Hi team,
In addition to the above timing queries, I am also unable to make WARP operate on Ch 14.
As suggested by Murpho, I made the following changes to the code:
Step 1: I added a switch case for Ch 14 in wlan_lib_channel_verify():
int wlan_lib_channel_verify(u32 mac_channel){
    int return_value;
    // We allow a subset of 2.4 and 5 GHz channels
    switch(mac_channel){
        // 2.4GHz channels
        case 1:
        case 2:
        case 3:
        case 4:
        case 5:
        case 6:
        case 7:
        case 8:
        case 9:
        case 10:
        case 11:
        case 14:
        // 5GHz channels
        case 36:
        case 44:
        case 48:
            return_value = 0;
        break;
        default:
            return_value = -1;
        break;
    }
    return return_value;
}
Step 2: I also changed the default channel to 14 in wlan_mac_ap.c and wlan_mac_sta.c:
#define WLAN_DEFAULT_CHANNEL 14
Step 3: Inside the wlan_channel dict in wlan_exp.util, I added the parameters for Ch 14:
wlan_channel = [{'index' :  1, 'channel' :  1, 'freq': 2412, 'desc' : '2.4 GHz Band'},
                {'index' :  2, 'channel' :  2, 'freq': 2417, 'desc' : '2.4 GHz Band'},
                {'index' :  3, 'channel' :  3, 'freq': 2422, 'desc' : '2.4 GHz Band'},
                {'index' :  4, 'channel' :  4, 'freq': 2427, 'desc' : '2.4 GHz Band'},
                {'index' :  5, 'channel' :  5, 'freq': 2432, 'desc' : '2.4 GHz Band'},
                {'index' :  6, 'channel' :  6, 'freq': 2437, 'desc' : '2.4 GHz Band'},
                {'index' :  7, 'channel' :  7, 'freq': 2442, 'desc' : '2.4 GHz Band'},
                {'index' :  8, 'channel' :  8, 'freq': 2447, 'desc' : '2.4 GHz Band'},
                {'index' :  9, 'channel' :  9, 'freq': 2452, 'desc' : '2.4 GHz Band'},
                {'index' : 10, 'channel' : 10, 'freq': 2457, 'desc' : '2.4 GHz Band'},
                {'index' : 11, 'channel' : 11, 'freq': 2462, 'desc' : '2.4 GHz Band'},
                {'index' : 14, 'channel' : 14, 'freq': 2484, 'desc' : '2.4 GHz Band'},
                {'index' : 36, 'channel' : 36, 'freq': 5180, 'desc' : '5 GHz Band'},
                {'index' : 44, 'channel' : 44, 'freq': 5220, 'desc' : '5 GHz Band'},
                {'index' : 48, 'channel' : 48, 'freq': 5240, 'desc' : '5 GHz Band'}]
Step 4: I connected the AP directly to a spectrum analyzer to see if a waveform is seen on Ch 14. Tx_gain = 0 dB.
a. When the AP and STA are powered on, the STA does not associate with the AP. Only after changing to a channel in 1-11 does the STA associate with the AP.
b. When I set the channel to 14 in the Python script and observed the waveform on the spectrum analyzer, there is no activity at 2.484 GHz. The activity is seen at the center frequency that was set earlier.
Do I need to make further changes in the code to make WARP operate on Ch 14?
Regarding the channel 14 question, the failure to associate at the STA is expected. The WLAN_DEFAULT_CHANNEL is really only a starting point for the STA, since it boots up and performs an active scan by looping over available channels and sending probe requests to look for the WARP AP. Even though you have allowed channel 14 with your earlier changes, the STA itself isn't trying to tune to channel 14 during its active scan, so it doesn't see the AP.
You can change this behavior in C by modifying the channel_selections array at the top of wlan_mac_sta.c. If you make that change, are you then able to see the boards associate? If so, can you then see if there is any activity at 2.484 GHz?
Looking through the code, there is one other place you need to change: wlan_mac_low_wlan_chan_to_rc_chan() in the MAC Low Framework. You can see how this is used in the IPC_MBOX_CONFIG_CHANNEL case of the IPC handler in CPU Low. By only modifying the verify function, you avoid the printf for an invalid channel but don't actually set the channel correctly.
Responding to the timestamp question:
I think the issue you are running into is that the 'time_to_done' field in the TX log entry is not quite measuring what you want. If you look at the transmit state machine, you can see that the 'time_to_done' (i.e. the 'delay_done' field of the tx_mpdu, line 718) is set after the entire Tx state machine has run. This includes some additional software overhead in addition to the over-the-air time. As you can see in the DCF, there are a number of things that happen after the reception of an ACK that take a non-negligible amount of time, including the reading of the microsecond counter itself. One thing to note is that the ~20 us discrepancy you are seeing is only ~3200 processor clock cycles (the processor runs at 160 MHz), which is not that much time, especially when interacting with data that is not in local LMB memory.
I'm not quite sure how to measure this SW delay with the default reference design (Chris or Patrick might have some ideas). However, if you need more accurate timestamps for certain events, then the best way to do that is to implement a hardware timestamp, similar to what we did for the TX / RX timestamp values in the WLAN MAC hardware peripheral. That way you do not have to account for SW delays.
mcccliii wrote:
1. I am conducting experiment at 1m between AP->STA. The propagation delay should be 1/3e2 us=.003us which is negligible.
But Tx_PHY-->Rx_PHY = 26or 27 us. Is it because of the T_PHY_RX_START_DLY 25 ?
The TX_LOW timestamp corresponds to the "TXSTART" event in the TX PHY. This is the moment when the PHY is started -- a few cycles before the first sample hits the DACs. The RX_OFDM timestamp corresponds to the "RXSTART" event in the RX PHY. This *isn't* the first sample. This is after the RX PHY decodes a good SIGNAL field. So the difference in time between TXSTART and RXSTART is the preamble duration plus the SIGNAL duration plus the SIGNAL decoding latency plus the TX-PHY-to-antenna latency.
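To make that arithmetic concrete, a rough sketch (the 16 us preamble and 4 us SIGNAL durations are the standard 802.11 OFDM values; the residual latency is inferred from the measurement above, not a documented constant):

```python
# Where the ~26 us TXSTART -> RXSTART gap comes from.
t_preamble = 16   # us, OFDM PLCP preamble (standard value)
t_signal = 4      # us, SIGNAL field, one OFDM symbol (standard value)
measured_gap = 26  # us, from the TX_LOW/RX_OFDM tables above

# Whatever remains is SIGNAL decoding latency plus TX-PHY-to-antenna latency.
decode_and_path_latency = measured_gap - (t_preamble + t_signal)
print(decode_and_path_latency)  # 6 -- a few microseconds of decode + analog path
```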
mcccliii wrote:
2. Following the 802.11g specification and wlan_mac_low.h, with the following parameters:
#define T_DIFS (T_SIFS + 2*T_SLOT)
#define T_EIFS 88
#define T_PHY_RX_START_DLY 25
#define T_TIMEOUT (T_SIFS+T_SLOT+T_PHY_RX_START_DLY)
I break down the time between DATA-->ACK for 1st packet (Mac_seq=1093) as follows:
payload=1428B (9Mbps) = 1428*8/9=1269us
Tx_PHY-->Rx_PHY= 26us
T_SIFS= 10+6=16us
ACK=14B=14*8/9=12us
Rx_PHY-->Tx_PHY=26us
Tx_PHY-->Tx_ACK= 1269+26+16+12+26=1349us != (t_done-t_phy_start=1370)
Is this the correct way to break down the time duration?
I calculate a duration of 1316 µs for the DATA frame via the following:
TXTIME = TPREAMBLE + TSIGNAL + TSYM × NSYM
NSYM_DATA = ceil( 8 × (2 + 1428 + 24 + 4) / 36 ) = 324
TXTIME_DATA = 16 + 4 + (4 × 324) = 1316 µsec
where the 2 bytes account for the service field, the 24 bytes account for the 802.11 header, and the 4 bytes account for the FCS. The 36 is the number of data bits per OFDM symbol at the 9 Mbps rate. Suppose the TXSTART for the data occurs at time txStart_data. The TXSTART for the ACK would then occur simply at:
txStart_ack = txStart_data + 1316 + 16
txStart_ack = txStart_data + 1332
The RXSTART of that ACK back at the originating node then occurs 26 usec later. But that number doesn't influence the TXTIME of the ACK; that's just when the originating node gets around to seeing it and timestamping it. From the perspective of the node transmitting the DATA, the ACK-transmitting node finishes its last sample TXTIME(ACK) after txStart_ack.
TXTIME = TPREAMBLE + TSIGNAL + TSYM × NSYM
NSYM_ACK = ceil( 8 × (2 + 10 + 4) / 24 ) = 6
TXTIME_ACK = 16 + 4 + (4 × 6) = 44 µsec
ACKs aren't sent at 9 Mbps. They are sent at the fastest half-rate-coded rate that is lower than the rate of the data transmission (so, 6 Mbps). That's why the ACK TXTIME above assumes 24 bits per OFDM symbol. So, from the perspective of the originating node, the last sample of the ACK hits the air 1332 + 44 = 1376 µsec after the first sample of the data frame. While this is closer to the (t_done - t_phy_start = 1370) number you measured, I can't reconcile why it's actually larger. I need to think about this and double-check my math; I'll post another response later if I figure it out.
mcccliii wrote:
3. What time duration does T_TIMEOUT represent: (first byte of ACK - last byte transmitted) or (last byte of ACK - last byte transmitted)?
The timeout doesn't need to encompass the whole ACK reception. The standard is written such that an RXSTART must occur within the timeout window, regardless of how much time it takes to get to RXEND. 802.11-2012 states in 9.3.2.8 that the ACKTimeout is (aSIFSTime + aSlotTime + aPHY-RX-START-Delay), so that's where our definition comes from.
Ah, the mistake I made was assuming that your data payload was 1428 bytes and then adding an additional 28 bytes for the header + FCS; the 1428 bytes in your log already include them. Correcting the math:
TXTIME = TPREAMBLE + TSIGNAL + TSYM × NSYM
NSYM_DATA = ceil( 8 × (2 + 1400 + 24 + 4) / 36 ) = 318
TXTIME_DATA = 16 + 4 + (4 × 318) = 1292 µsec
After the SIFS we've got 1292+16 = 1308 µs. After the TXTIME_ACK, we've got 1308+44 = 1352 µs. Coincidentally, that number is pretty close to your original calculation (1349 µs), but the reasoning behind it is quite different. From there, the difference from the 1370 µs time_to_done measurement is the software overhead that Erik spoke of.
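For reference, the TXTIME arithmetic above can be wrapped in a small function. This is a sketch of the standard 802.11a/g formula as used in this exchange (the 6 tail bits are ignored here, which happens not to change these particular results):

```python
import math

def txtime_us(mpdu_bytes, data_bits_per_sym):
    """TXTIME = TPREAMBLE (16 us) + TSIGNAL (4 us) + 4 us per OFDM symbol.
    mpdu_bytes includes the MAC header and 4-byte FCS; the 2 service bytes
    are added here. Tail bits are omitted, matching the arithmetic above."""
    n_sym = math.ceil(8 * (2 + mpdu_bytes) / data_bits_per_sym)
    return 16 + 4 + 4 * n_sym

txtime_data = txtime_us(1400 + 24 + 4, 36)  # 1400-byte payload at 9 Mbps
txtime_ack = txtime_us(10 + 4, 24)          # 14-byte ACK at 6 Mbps
print(txtime_data, txtime_ack)  # 1292 44
```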