WARP Project Forums - Wireless Open-Access Research Platform


#1 2015-Sep-30 11:12:41

wenchao88
Member
Registered: 2015-Aug-22
Posts: 26

Delay the timing of packet transmission

I want to change the timing of data packet transmission. More specifically, I want to change "frame_transmit" in "wlan_mac_dcf.c". I have already found the places where a data packet is first transmitted and where it is retransmitted. I wonder how to delay the transmission of a data packet to the timing I want.

Currently, I am using the function "wlan_mac_dcf_hw_start_backoff( slot_number )", but I am not sure:

1. How long will the data packet be delayed per slot?

2. Are there any side effects if I use this function to delay packets? Is there a better way to delay the timing of data packet transmission?

Thanks

Offline

 

#2 2015-Sep-30 15:08:13

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

I am also curious about the timing accuracy of WARP. In one thread, the administrator mentioned that the timing accuracy is 64 us and can be further reduced by changing the header file. Does this mean that the backoff can be very accurate?
Thanks

Offline

 

#3 2015-Sep-30 15:57:08

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Delay the timing of packet transmission

You'll need to be much more specific about what you're trying to achieve.

Do you just need to re-define the normal MAC timing values (slot, DIFS, SIFS)?

By how much are you trying to delay a transmission? A few usec? Many msec? Should the eventual transmission be aligned with some other event (e.g. transmit N usec after some other packet is received)?

The function you refer to above starts a standard backoff in the DCF. Backoffs are quantized to slot durations (slot = 9 usec by default). The DCF chooses a random number of slots for each backoff, with the range of the random value controlled by the contention window. More Tx failures increase the contention window, leading to longer backoffs.

I am also curious about the timing accuracy of WARP. In one thread, the administrator mentioned that the timing accuracy is 64 us and can be further reduced by changing the header file. Does this mean that the backoff can be very accurate?

I'm not sure what thread you're referring to. You'll need to be more specific about what timing you're asking about. In a standard AP+STA network the STA nodes keep their local timestamps synchronized with the AP via the timestamp field in received beacons. The accuracy of this scheme depends on the beacon interval (102 msec by default), the hardware's oscillator accuracy (~3ppm on WARP v3) and the reliability of beacon reception (depends on the wireless environment).

Offline

 

#4 2015-Sep-30 16:27:53

welsh
Administrator
From: Mango Communications
Registered: 2013-May-15
Posts: 612

Re: Delay the timing of packet transmission

I would suggest that you dig in and really understand the MAC Support HW core and how it interacts with the code that implements the DCF functionality.  There are a number of timing parameters that can be used to allow you to control timing around how transmissions are sent.  However, as murphpo said, you need to be more specific about what you are trying to achieve.

Offline

 

#5 2015-Sep-30 23:33:57

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thanks very much for your reply.
This is the thread: Scheduler precision enhancement in the 802.11 reference design (http://warpproject.org/forums/viewtopic.php?id=2560).
My current experiment is to perturb the transmission timing of the queued data packets on WARP, for example, delaying the transmission time of one data packet by 1 millisecond. So I am curious about the granularity of the timing.

Offline

 

#6 2015-Oct-01 09:40:36

welsh
Administrator
From: Mango Communications
Registered: 2013-May-15
Posts: 612

Re: Delay the timing of packet transmission

Taking a step back, if you look at the 802.11 architecture, the reference design splits the MAC into two pieces.  The Lower-level MAC handles all medium access (i.e. how / when the packet actually hits the medium, retransmissions, etc.), while the Upper-level MAC handles the inter-packet state behavior (i.e. what to do with received packets, when to send packets to the lower-level MAC to be transmitted, etc.).  In order to handle all the inter-packet behavior, the upper-level MAC requires a scheduler to keep track of all the necessary tasks.  This is the scheduler being discussed in the thread you referenced.

If you look at the WLAN Exp LTG documentation, you can see that the node can use the scheduler to set the interval of when packets are added to the wireless transmit queues.  In one of the examples, you can see that the LTG is "fully backlogged", i.e. the interval used to add packets to the transmit queues is 0 seconds which is faster than it is possible to transmit packets over the air.  This means that the transmit queues are always full and packets are transmitted by the node as fast as the medium allows.  You can adjust the interval at which packets are added to the transmit queues, which uses the fine scheduler, but the rate at which the queues are emptied depends on how fast the lower-level MAC can transmit packets.

Now, if you actually need to adjust how / when the packet hits the medium versus when the packet is put into the transmit queues to be processed by the lower-level MAC, then you can modify the lower-level MAC.  Again, I would suggest that you read up on the MAC Support HW core to understand the timings it provides to put packets on the medium.  If you look at the wlan_mac_low_dcf_init() function that initializes the hardware core, you can see that all parameters are specified in units of 100 nanoseconds.  The main thing to be aware of is that you cannot create arbitrarily long delays in the hardware core.  You need to make sure that the value specified can be represented by the bits in the timing field.  For example, if you look at wlan_mac_set_slot(), you can see that the value is limited to 10 bits (i.e. it can be a value between 0 and 1023), which means that you can set a maximum slot time of 102.3 usec (i.e. 1023 * 100 nsec).  All MAC timing parameters in the hardware core have constraints like this, and you need to look at the code to make sure you don't exceed the maximum values.

One other thing to mention is that both CPU High and CPU Low have access to a usleep() function.  This can be used to add arbitrary delays to code, but you have to be careful that you don't use it in timing-critical places.  For example, the frame_receive() documentation clearly states that it contains timing-critical code and should not have large delays added.

Offline

 

#7 2015-Oct-01 22:42:07

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thank you very much for your detailed information. It is very helpful for me.

Offline

 

#8 2015-Dec-03 22:10:18

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

I am wondering how I can make accurate measurements of the delay in the WARP code, and what the magnitude of this delay is.
For example, the exact time between when the packet is put into the queue and when this packet is sent out (not considering the delay caused by the CSMA backoff, only the delay introduced by the WARP system).
Thanks.

Offline

 

#9 2015-Dec-03 22:38:51

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Delay the timing of packet transmission

To compute the latency of any code path you can read the microsecond timestamp at any point in the code, record it, then compare it to a future timestamp value.

For example, the exact time between when the packet is put into the queue and when this packet is sent out (not considering the delay caused by the CSMA backoff, only the delay introduced by the WARP system).

One important point about this: the time a packet spends in a Tx queue depends on many other factors. The lower-level MAC and PHY only process one Tx packet at a time. The time for the DCF/PHY to transmit a single packet is unbounded. In theory the DCF can defer a transmission forever (i.e. the medium is always busy). A packet sitting in the Tx queue will not be passed to the lower MAC until the previous Tx has completed. The time from enqueue to dequeue is not fixed.

The reference code already records a few time values that track the latencies in processing each Tx packet. The CPU High code records the enqueue time of each packet. The CPU Low code records the "accept" time, or the delay from enqueue to the packet being passed to the frame_transmit() function in CPU Low. The CPU Low code also records the actual Tx time of every PHY transmission for a given MPDU. These PHY Tx timestamps are recorded in the TX_LOW log entries. Finally, the CPU Low code records the "done" time, reflecting the delay from the accept time to the completion of the frame_transmit() function for a given packet. The enqueue time, delay-to-accept and delay-to-done are recorded in the TX log entries for every packet.

Offline

 

#10 2015-Dec-03 22:54:38

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thanks for the reply. I was just curious about the delay caused by calling these functions, assuming everything is good (the packet buffer is empty, the medium is idle). I will do experiments to help me.

Offline

 

#11 2015-Dec-04 09:42:50

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Delay the timing of packet transmission

I was just curious about the delay caused by calling these functions, assuming everything is good (the packet buffer is empty, the medium is idle). I will do experiments to help me.

Ah, I understand your question better. We don't do detailed characterization of the latency of each processing step for each release of the design. However I do have some coarse measurements for the v1.3 release. These are illustrated below.

In these images the green blocks represent the time a payload is on the Ethernet wire, assuming a 1Gbps link. The red blocks represent the best-case latency through the Tx/Rx MAC code. The blue blocks represent the time the wireless packet is on the air. The first image shows the timing for a single packet, with the nodes being idle before and after Tx/Rx. The second image shows the same events for 2 packets back-to-back, again with idle nodes before/after the 2-pkt Tx/Rx.

One very important observation here is that most of the MAC code overhead is pipelined with other events. When the lower-level MAC and PHY are transmitting a packet, the upper-level MAC can begin preparing the next packet for transmission. As a result the effective overhead-per-packet depends on the overall activity level of the node, with the lowest effective overhead occurring when the node is busiest.

https://warpproject.org/dl/images/forum_images/mango_80211_tx_timing_1pkt.png
Full res - 1 Packet

https://warpproject.org/dl/images/forum_images/mango_80211_tx_timing_2pkt.png
Full res - 2 Packets

Offline

 

#12 2015-Dec-04 10:25:04

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thanks so much for your help.
I still have some questions about this figure.
(1) In the figure, does the "Tx MAC processing" stand for sensing the channel and making sure that the channel is idle?
(2) What is the meaning of the second arrow, the one from the brown chunk to the second blue chunk? I can roughly guess that this is because the upper MAC and the transmission can work at the same time. Is this correct?
(3) Then the interval between the two blue chunks is the result of CSMA, due to the current channel state?

Offline

 

#13 2015-Dec-04 10:38:47

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Delay the timing of packet transmission

(1) In the figure, does the "Tx MAC processing" stand for sensing the channel and making sure that the channel is idle?

The Tx MAC time does not include any carrier sensing time. We took these measurements with an idle channel. This means "idle for DIFS" was true when the lower-level MAC submitted the packet for transmission. Per the DCF if the medium has been idle for >DIFS a node can transmit immediately.

(2) What is the meaning of the second arrow, the one from the brown chunk to the second blue chunk? I can roughly guess that this is because the upper MAC and the transmission can work at the same time. Is this correct?

The arrow just shows which MAC processing time corresponds to which PHY Tx time. The important thing in the second figure is that the MAC processing for packet #2 occurs while the PHY is processing packet #1.

Offline

 

#14 2015-Dec-04 10:45:09

chunter
Administrator
From: Mango Communications
Registered: 2006-Aug-24
Posts: 1212

Re: Delay the timing of packet transmission

zhimeng wrote:

(1) In the figure, does the "Tx MAC processing" stand for sensing the channel and making sure that the channel is idle?

More than that: the time represented in the Tx MAC Processing block is the code processing the Ethernet reception. The code has to:

(a) Inspect the frame and determine whether it should be sent wirelessly (e.g., addressed to an associated station, a known packet type, etc.)
(b) Perform Ethernet encapsulation
(c) Pass the frame down to the lower-level MAC (a DMA operation from DRAM into the Tx packet buffer plus an interprocessor message)
(d) Configure the MAC Support Core to send the frame

The channel sensing happens in the background all the time in the MAC Support Core. At the moment the code tells the core to send a frame, the core can start the transmission provided the medium has been idle for a DIFS period before that.

zhimeng wrote:

(2) What is the meaning of the second arrow, the one from the brown chunk to the second blue chunk? I can roughly guess that this is because the upper MAC and the transmission can work at the same time. Is this correct?

Steps (a)-(c) in my prior response can happen while a frame is currently being transmitted. That arrow just shows that the processing of that Ethernet frame occurred and the packet is ready to send at the next available opportunity (ideally a DIFS interval after the previous transmission).

zhimeng wrote:

(3) Then the interval between the two blue chunks is the result of CSMA, due to the current channel state?

That interval is, at a minimum, a backoff period + a DIFS interval. If slot 0 is chosen, then the gap is a DIFS interval. Note: this drawing shows the case of back-to-back transmissions that do not require acknowledgements (i.e. multicast frames). The gap between transmissions is larger if there is an ACK reception in between.

Last edited by chunter (2015-Dec-04 10:47:39)

Offline

 

#15 2015-Dec-04 15:39:33

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thanks so much for your help! Your information helps me a lot in understanding the protocol.

Offline

 

#16 2015-Dec-04 16:24:02

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: Delay the timing of packet transmission

One thing I forgot to mention above: we took these measurements using an oscilloscope connected to the debug header on the WARP v3 board. Asserting a GPIO from code is fast (just 1 register write). This allows more precise measurements with lower risk of affecting latencies due to the time required for reading timestamps, doing u64 arithmetic, and xil_printf'ing values.

In CPU High you can use the wlan_mac_high_set_debug_gpio(x) and wlan_mac_high_clear_debug_gpio(x) macros to set/clear gpio pins, where x is a mask of pins to set/clear. In the current (v1.3) design 3 pins (pins 12:14; see user guide details) are allocated to software-controlled gpio.

Offline

 

#17 2015-Dec-04 22:16:08

zhimeng
Member
Registered: 2015-Sep-30
Posts: 47

Re: Delay the timing of packet transmission

Thanks. I am trying to use the oscilloscope to observe the waveforms.

Offline

 
