WARP Project Forums - Wireless Open-Access Research Platform


#1 2017-Dec-14 21:13:50

zrcao
Member
From: Vienna, VA
Registered: 2007-Jan-24
Posts: 121

LLR Calculation?

I recently got a question about how LLRs are calculated in the WARP Rx. The receiver applies fixed scaling ratios, i.e. 15 for BPSK/QPSK, 18 for 16QAM, and 22 for 64QAM. How did you calculate these scaling ratios? Are there any references? Thanks.
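For concreteness, here is a minimal Python sketch of how I read the scaling step. The constants are the ones above; where the scaling sits in the pipeline and the surrounding details are only my assumption, not the actual model.

    import numpy as np

    # Per-modulation LLR scaling constants as I see them in the Rx model.
    LLR_SCALE = {"BPSK": 15, "QPSK": 15, "16QAM": 18, "64QAM": 22}

    def scale_soft_values(soft_values, modulation):
        """Apply the fixed per-modulation scaling to raw soft demapper outputs."""
        return LLR_SCALE[modulation] * np.asarray(soft_values, dtype=float)

    # The same raw soft value gets a larger swing for denser constellations.
    raw = np.array([-1.0, -0.25, 0.0, 0.25, 1.0])
    print(scale_soft_values(raw, "QPSK"))   # -> [-15, -3.75, 0, 3.75, 15]
    print(scale_soft_values(raw, "64QAM"))  # -> [-22, -5.5, 0, 5.5, 22]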


#2 2017-Dec-15 09:57:23

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: LLR Calculation?

We found those scaling values empirically, by sweeping the values and measuring PER with the design running in hardware.
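Roughly, the search had this shape (the helper names below are placeholders for illustration, not reference design APIs; the real tests drove the hardware design directly and measured PER):

    # Sketch of the empirical sweep: for each candidate scaling value, run a
    # batch of packets through the Tx/Rx hardware, record PER, keep the best.
    def sweep_llr_scale(candidate_scales, run_per_trial, num_pkts=1000):
        """run_per_trial(scale, num_pkts) -> PER is a placeholder for the
        hardware test; returns the best scale and the full sweep results."""
        results = [(s, run_per_trial(s, num_pkts)) for s in candidate_scales]
        best_scale = min(results, key=lambda r: r[1])[0]
        return best_scale, results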


#3 2017-Dec-15 12:01:41

zrcao
Member
From: Vienna, VA
Registered: 2007-Jan-24
Posts: 121

Re: LLR Calculation?

Interesting. I thought the scaling ratio was SNR dependent. So you found, empirically, that this one set of values works across a range of SNRs?


#4 2017-Dec-15 12:42:32

murphpo
Administrator
From: Mango Communications
Registered: 2006-Jul-03
Posts: 5159

Re: LLR Calculation?

It's been a long time since I ran those tests. My recollection is that I didn't observe much variation with Rx power. It's definitely possible the optimum scaling value varies with SNR. However, this is probably complicated by interaction with the AGC (i.e. if the AGC makes good gain selections, the magnitudes of the complex values at the equalizer output are roughly independent of Rx power).


#5 2017-Dec-15 20:53:13

zrcao
Member
From: Vienna, VA
Registered: 2007-Jan-24
Posts: 121

Re: LLR Calculation?

OK. If it is not too much trouble, could you please explain the follow-up quantization and simplified LLR generation procedure? The following is what I see in the 802.11 reference design.

1. The quantization maps scaled soft values (SV) larger than 1024 to 7, smaller than -1024 to -8, and SVs within [-1024, 1024] to [-8, 7] proportionally. Value 7 means bit 1 with the highest confidence, and value -8 means bit 0 with the highest confidence. Note that the range is not symmetric between 0 and 1: -7 expresses less confidence in bit 0 than +7 does in bit 1.

2. Next, the SV is negated. The data type stays at Fix_4_0, and the negate block saturates on overflow. The net effect is that the SV range shrinks from [-8, 7] to [-7, 7], with -7 now indicating bit 1 and +7 indicating bit 0. It is OK to collapse the would-be values +7 and +8 into a single +7, as both are highly confident.

3. The SV is then reinterpreted from Fix_4_0 to UFix_4_0 for concatenation, and reinterpreted back to Fix_4_0 after the de-interleaver.

4. But right in front of the Viterbi decoder black box, the SV is again reinterpreted from Fix_4_0 to UFix_4_0.

The last two steps are confusing. The Viterbi decoder usually expects unsigned soft values, so step 3 is unnecessary. In general, for UFix_4_0, 15 is the most confident value for bit 1 and 0 is the most confident for bit 0, so we need to map the SV from [-7, 7] to [0, 15]. But how can a reinterpretation achieve this mapping? For example, -7 in Fix_4_0 is 1001, which reinterprets to 9 in UFix_4_0, while -1 is 1111 in Fix_4_0 and reinterprets to 15 in UFix_4_0.
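To make the bit arithmetic concrete, here is a small Python recreation of how I read steps 1-4 (my own sketch, not the actual model):

    # 4-bit signed quantization, saturating negation, then reinterpretation of
    # the Fix_4_0 bit pattern as UFix_4_0.
    def quantize_fix4(sv_scaled):
        """Map a scaled soft value in [-1024, 1024] onto the Fix_4_0 range [-8, 7]."""
        q = int(round(sv_scaled * 8 / 1024))
        return max(-8, min(7, q))

    def negate_sat_fix4(q):
        """Negate with saturation, so -8 becomes +7 instead of wrapping."""
        return max(-8, min(7, -q))

    def reinterpret_fix4_as_ufix4(q):
        """Reinterpret the 4-bit two's-complement pattern as an unsigned value."""
        return q & 0xF

    # The examples from above: the reinterpretation is not a monotonic
    # confidence mapping onto [0, 15].
    print(reinterpret_fix4_as_ufix4(-7))  # 9  (bit pattern 1001)
    print(reinterpret_fix4_as_ufix4(-1))  # 15 (bit pattern 1111)
    print(reinterpret_fix4_as_ufix4(7))   # 7  (bit pattern 0111)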

What format, in terms of confidence mapping, does the Viterbi decoder black box expect for its LLR inputs?

