WARP Project Forums - Wireless Open-Access Research Platform

#1 2013-Aug-29 15:27:58

Julian
Member
Registered: 2010-Nov-10
Posts: 85

Queue management for 802.11 reference design

Hi,
  Are there any online introduction materials about the queue management in the 802.11 Reference Design?
I have been reading the C code for a couple of days, but cannot get the idea.

Thank you

Last edited by Julian (2013-Aug-29 16:21:39)


#2 2013-Aug-29 16:43:51

chunter
Administrator
From: Mango Communications
Registered: 2006-Aug-24
Posts: 1212

Re: Queue management for 802.11 reference design

We're still working on much of the documentation (including the queue management) but I'll do my best to give you an overview of how it works in this post.

First, some nomenclature: Each queue element is called a pqueue in Reference Design 0.3 and has since been renamed to a packet_bd in our most recent work. We'll adopt the packet_bd naming convention in all future reference design releases. A pqueue_list (now packet_bd_list) is just a quick way of referring to an arbitrary chain of queue elements.
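
To make the nomenclature concrete, here is a rough sketch of what those two types look like conceptually. The field names and types below are simplified placeholders for illustration, not the actual reference design headers:

/* Simplified sketch -- not the real headers. A packet_bd (formerly pqueue)
 * is one node in a doubly-linked list; a packet_bd_list is a chain of them. */

typedef struct packet_bd {
    struct packet_bd *next;      /* next element in the chain (NULL if last)      */
    struct packet_bd *prev;      /* previous element in the chain (NULL if first) */
    void             *buf_ptr;   /* packet buffer this element describes          */
    void             *metadata;  /* per-packet bookkeeping (simplified)           */
} packet_bd;

typedef struct {
    packet_bd *first;            /* head of the chain               */
    packet_bd *last;             /* tail of the chain               */
    int        length;           /* number of elements in the chain */
} packet_bd_list;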

1) At boot, the code sets up a big doubly-linked list of queue elements and sets them aside as "free."

2) A list of queue elements can be checked out from this free pool with queue_checkout. Once this happens, these queue elements are no longer considered free. Only after the queue_checkin do they re-enter the free pool so they can be checked out again.

3) Checked-out queue elements are filled in with packet contents. A good example of this is the beacon_transmit code, where 1 queue element is checked out from the free pool. Then, a beacon MPDU is created at the location pointed to by that checked-out queue element and some metadata is updated.

4) Once a checked-out queue element is ready to be transmitted, it needs to be enqueued into an outgoing queue. In the above example for the beacon, the enqueue_after_end command is called with a first argument of 0. In our AP code, this argument is used to specify that the packet enters the queue intended for broadcast traffic. For unicast traffic, this argument is instead unique to the station, allowing each station to have its own independent queue of data sent to it.

5) When CPU_LOW indicates that it is ready for a new packet to be sent wirelessly, the AP does a round-robin check of all outgoing queues. If that check finds something in an outgoing queue, ultimately the mpdu_transmit function is called with an argument that is a pointer to a single queue element that needs to be passed down to CPU_LOW for wireless transmission. The caller of mpdu_transmit, wlan_mac_poll_tx_queue, checks that queue element back into the free pool after it has been passed down to CPU_LOW, which allows the queue element to be checked back out in the future for new data packets (the whole life cycle is sketched in code below).

That's the general life cycle of a queue element: it's free -> it's checked out and modified to hold a packet -> it's enqueued into an outgoing Tx queue -> it's dequeued at some point and transmitted wirelessly -> it's checked back in to the free pool for the process to begin anew.
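
To make steps 2) through 5) concrete, here is a rough sketch of that life cycle in code, reusing the packet_bd types from the earlier sketch. The queue function names follow the ones mentioned above, but the argument lists are simplified stand-ins, and build_my_mpdu and dequeue_from_queue are hypothetical helpers, so check the reference design source for the real signatures:

#include <stddef.h>   /* for NULL */

/* Stand-in prototypes -- not the real API signatures. */
void       queue_checkout(packet_bd_list *list, unsigned int num_elements);
void       queue_checkin(packet_bd_list *list);
void       enqueue_after_end(unsigned int queue_id, packet_bd_list *list);
packet_bd *dequeue_from_queue(unsigned int queue_id);   /* hypothetical */
void       mpdu_transmit(packet_bd *tx_bd);
void       build_my_mpdu(void *buf);                    /* hypothetical */

/* Steps 2-4: check one element out of the free pool, build the MPDU in its
 * buffer, and enqueue it. Queue 0 is used here the way the AP uses it for
 * broadcast traffic. */
void example_send_one_packet(void) {
    packet_bd_list checkout;

    queue_checkout(&checkout, 1);            /* step 2: grab 1 free element  */
    if (checkout.length == 0) return;        /* free pool is empty right now */

    build_my_mpdu(checkout.first->buf_ptr);  /* step 3: fill in the packet   */

    enqueue_after_end(0, &checkout);         /* step 4: outgoing queue 0     */
}

/* Step 5: when CPU_LOW is ready, dequeue one element, pass it down, then
 * check it back in so it re-enters the free pool. */
void example_poll_tx_queue(void) {
    packet_bd *tx_bd = dequeue_from_queue(0);
    if (tx_bd == NULL) return;

    mpdu_transmit(tx_bd);                    /* hand off to CPU_LOW */

    packet_bd_list done;                     /* wrap the single element so */
    done.first  = tx_bd;                     /* it can be checked back in  */
    done.last   = tx_bd;
    done.length = 1;
    queue_checkin(&done);
}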

One final thing I want to point out is that the queues are used heavily by the Ethernet DMA receptions, such that queue elements are automatically filled in with Ethernet receptions without CPU_HIGH even needing to get involved. This gives us a massive performance boost over, say, the way the OFDM Reference Design handled Ethernet receptions (which was very susceptible to packet drops with bursty receptions).

When Ethernet is set up, a bunch of queue elements are first checked out. A bunch of Ethernet DMA buffer descriptors are then set up to automatically fill in the memory addresses pointed to by those checked-out queue elements. Then, the code just waits around until the DMA automatically fills in those queue elements with received Ethernet frames, at which point they are looped over, encapsulated, and enqueued into the relevant outgoing queues. This means that even while CPU_HIGH is busy doing other things, the DMA engine can always be filling in buffer descriptors that point to checked-out queue elements. There is no payload copy involved at all.

By default, if you have a stock WARP v3 node with the included DRAM SODIMM installed, the 802.11 Reference Design sets up 3000 queue elements, all in DRAM. 200 of those are checked out and handed off to the Ethernet DMA initialization, so it is free to receive up to 200 Ethernet frames in a burst without the code in CPU_HIGH even having to do anything. As receptions occur, the code processes them, enqueues them, and checks out more queue elements to give back to the Ethernet DMA. We've found that keeping 200 floating is more than enough for CPU_HIGH to keep up and get more queue elements checked out and handed to the Ethernet DMA. I think the high-water mark from my tests has been around ~30 packets (i.e. 30 Ethernet packets were received by the time my code got around to processing them all as a batch).
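
Here's a similarly rough sketch of that Ethernet receive pattern, continuing with the same simplified types and stand-in queue functions. dma_attach_rx_buffer, dma_get_filled_rx_buffer, wlan_encapsulate, and queue_id_for_dest are hypothetical wrappers standing in for the actual DMA driver and encapsulation code:

#include <stddef.h>   /* for NULL */

#define NUM_ETH_RX_ELEMENTS 200   /* elements pre-checked-out for Ethernet RX */

/* Hypothetical wrappers around the DMA driver and encapsulation code. */
void         dma_attach_rx_buffer(void *buf);
packet_bd   *dma_get_filled_rx_buffer(void);
void         wlan_encapsulate(void *buf);
unsigned int queue_id_for_dest(packet_bd *bd);

static packet_bd_list eth_rx_list;

/* At Ethernet setup: check out 200 elements and point one DMA buffer
 * descriptor at each element's buffer, so received frames land directly in
 * queue memory with no payload copy. */
void eth_rx_init(void) {
    queue_checkout(&eth_rx_list, NUM_ETH_RX_ELEMENTS);

    for (packet_bd *bd = eth_rx_list.first; bd != NULL; bd = bd->next) {
        dma_attach_rx_buffer(bd->buf_ptr);
    }
}

/* Later, from CPU_HIGH's main loop: for each element the DMA has filled,
 * encapsulate it, enqueue it into the right outgoing queue, and hand a
 * fresh element back to the DMA so the pool of 200 stays topped up. */
void eth_rx_poll(void) {
    packet_bd *bd;

    while ((bd = dma_get_filled_rx_buffer()) != NULL) {
        wlan_encapsulate(bd->buf_ptr);

        packet_bd_list one;
        one.first  = bd;
        one.last   = bd;
        one.length = 1;
        enqueue_after_end(queue_id_for_dest(bd), &one);

        packet_bd_list fresh;
        queue_checkout(&fresh, 1);
        if (fresh.length == 1) {
            dma_attach_rx_buffer(fresh.first->buf_ptr);
        }
    }
}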

Let me know if you have any other questions. It's definitely complicated, but I think it's pretty elegant and extendable for research purposes. For example, the round-robin way that the AP code dequeues things isn't set in stone. You could easily use all the same framework for doing QoS stuff where certain packets have higher priority than others. Likewise, because doubly-linked lists are so nice, you could easily stick packets in the front or middle of a queue and have them jump the line.
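
As a quick illustration of that last point: with the packet_bd fields from the earlier sketch, a "jump the line" helper is only a few pointer updates. This is not an existing function in the reference design, just a hypothetical example of what the doubly-linked lists make easy:

#include <stddef.h>   /* for NULL */

/* Hypothetical helper: insert a high-priority element at the head of an
 * outgoing queue instead of the tail, so it is dequeued first. */
void enqueue_before_start(packet_bd_list *queue, packet_bd *bd) {
    bd->prev = NULL;                  /* new head has nothing before it */
    bd->next = queue->first;

    if (queue->first != NULL) {
        queue->first->prev = bd;      /* old head points back at the new head */
    } else {
        queue->last = bd;             /* queue was empty: also the new tail   */
    }

    queue->first = bd;
    queue->length++;
}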


#3 2013-Aug-29 16:52:41

Julian
Member
Registered: 2010-Nov-10
Posts: 85

Re: Queue management for 802.11 reference design

Thank you very much, I really appreciate this detailed introduction.
