Saturday, February 28, 2015

Experiments on bladeRF

Receiver:
The challenges to be addressed:
- The power of the GPS signal at the receiver is about -160 dBW, while the bladeRF sensitivity is about -130 dBW. Consequently, we need an active antenna with 30 dB or more of gain.
- USB 2.0 is more stable than USB 3.0, but much slower. In particular, laptop USB ports are often unstable; this is taken into account in the experiment.
- There is no detector to recognize the loss of samples.
Scenario:
Step 1, Connect the bladeRF to the active antenna in the NavSAS Lab.
Step 2, Collect the signal and observe the received signal on the oscilloscope. The following commands, taken from http://bladerf.blogspot.it/2014/01/finally-i-can-see-gps-satellites-with.html, collect the signal:

[WARNING] extract_field: Field checksum mismatch
[WARNING] Could not extract VCTCXO trim value
[WARNING] extract_field: Field checksum mismatch
[WARNING] Could not extract FPGA size
bladeRF> load fpga hostedx115.rbf
Loading fpga from hostedx115.rbf...
Done.
bladeRF> print frequency

  RX Frequency: 1000000000Hz
  TX Frequency: 1000000000Hz

bladeRF> set frequency 1575420000

  Set RX frequency: 1575420000Hz
  Set TX frequency: 1575420000Hz

bladeRF> set samplerate 4000000

  Setting RX sample rate - req:   4000000Hz, actual:   4000000Hz
  Setting TX sample rate - req:   4000000Hz, actual:   4000000Hz

bladeRF> set bandwidth 2.5E6

  Set RX bandwidth - req:  2500000Hz actual:  2500000Hz
  Set TX bandwidth - req:  2500000Hz actual:  2500000Hz

bladeRF> set rxvga1 30
bladeRF> set rxvga2 30
bladeRF> rx config format=csv n=40000 file=/temp/AKIEL.csv
bladeRF> rx start
bladeRF> rx config format=csv n=40000 file=/temp/BKIEL.csv
bladeRF> rx start
bladeRF> rx config format=csv n=40000 file=/temp/CKIEL.csv
bladeRF> rx start
bladeRF> rx start
bladeRF> rx start
bladeRF> rx config format=csv n=40000 file=/temp/DKIEL.csv
bladeRF> rx start
bladeRF> rx start
bladeRF> rx start
bladeRF> rx config format=csv n=80000 file=/temp/CKIEL.csv
bladeRF> rx config format=csv n=80000 file=/temp/DKIEL.csv
bladeRF> rx start

bladeRF> rx start
bladeRF>

Step 3, Use the Matlab-based GNSS RX to acquire and track the signal.
Step 4, Increase the sampling frequency and repeat steps 1 to 3.
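Before feeding a capture to the Matlab receiver, it can help to sanity-check it. A minimal Python sketch (assuming bladeRF-cli's csv format of one `I, Q` pair of signed integers per line; that field layout is an assumption here, not taken from the original post):

```python
# Load a bladeRF csv capture and report its mean power, to verify that the
# active antenna actually lifts the signal above the receiver noise floor.
import csv
import math

def load_iq(path):
    """Read one complex sample per line, assuming 'I, Q' integer pairs."""
    samples = []
    with open(path) as f:
        for row in csv.reader(f):
            i, q = int(row[0]), int(row[1])
            samples.append(complex(i, q))
    return samples

def mean_power_db(samples):
    """Mean power of the samples in dB (relative to 1 LSB^2)."""
    p = sum(abs(s) ** 2 for s in samples) / len(samples)
    return 10 * math.log10(p)
```

A capture with the antenna disconnected should show a clearly lower mean power than one with the active antenna attached.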

Transmitter:

Friday, January 9, 2015

Implementing PLL/FLL/DLL in GNSS RX

1. Basic scheme (Kaplan 2006)
The scheme for the PLL and DLL in the tracking loop of a GNSS RX is given in (Kaplan 2006).
In the Tracking Loop Filters post we presented the common scheme of a tracking loop filter for a digital receiver; here we compare the PLL/DLL of a GNSS RX with that common scheme. The given scheme actually contains two blocks (PLL and DLL). In the noise-free case they can work separately; in reality, the two blocks must work together. For the sake of simplicity, we consider the PLL.
First, we have to compute I and Q in the E, P and L (Early, Prompt, Late) channels:
I=Ax(n)+noise(n)
Q=Ay(n)+noise(n)
real_error(n)=f(x(n)/y(n))
estimate_error(n)=f(I(n)/Q(n))
The DLL affects the coefficient A. It's clear that A should be as large as possible.

Take the PLL as an example. We compare the PLL and the common tracking loop (CTL):
1, Discriminator
2, F(z)
3, NCO or D(z)
The NCO update rate is equal to the sampling frequency $F_s=1/T_s$. On the other hand, the PLL updates the carrier phase once every integration interval $T_d$. We will consider the characteristic of the NCO in the z-domain:
The output of the PLL is often expressed as $\frac{T_d}{2}\frac{d\Phi_k}{dt}$. It's easy to prove that
$$\Phi_{k+1}=\Phi_{k}+T_d/2(\frac{d\Phi_{k+1}}{dt}+\frac{d\Phi_k}{dt})$$.
Therefore:
$$N(z)=\frac{T_d}{2}\frac{z+1}{z-1}$$
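The trapezoidal rule behind $N(z)$ can be checked numerically; with a constant phase rate the accumulated phase comes out exact (a small Python sketch, not part of the original post):

```python
# Trapezoidal NCO update: Phi[k+1] = Phi[k] + (Td/2)*(dPhi[k+1] + dPhi[k]).
# With a constant phase rate w, the accumulated phase equals k*Td*w.
Td = 1e-3    # integration time (s), arbitrary for this check
w = 2.0      # constant phase rate (rad/s), arbitrary
phi = 0.0
for k in range(100):
    phi += (Td / 2) * (w + w)   # dPhi[k+1] = dPhi[k] = w
# phi now equals 100 * Td * w (up to floating-point round-off)
```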

Saturday, January 3, 2015

Tracking Loop Filters

1. Introduction to tracking loop filter in GNSS RX
- The tracking loop filter is a well-known concept in control theory; the tracking loop filter in a GNSS RX is designed by analogy with control systems, especially the tracking stage. For a deeper understanding of tracking loop filters, see the control-system references in the reference part of this post.
- The aim of the tracking filter is to estimate the reference parameter as accurately as possible. For a noise-free signal, that parameter could be estimated directly. Unfortunately, the incoming signal always includes noise; this leads to the use of a loop filter in order to reduce the effect of the error.
- To analyze the characteristics of a tracking loop filter, we must consider its effect on the desired signal and on the noise separately. In most cases, the noise is assumed to be white and Gaussian.
2. Tracking loop filter design
We consider two kinds of input:
- Step input
- Ramp input
and the noise is assumed to be white Gaussian.
This leads to two kinds of filters, namely first-order and second-order filters.
a, Signal combination
The input signal $y[k,\xi]$ is combined with the reference signal $y_{ref}[k,\hat{\xi}[k]]$. The local reference signal has the same basic structure as the input signal: we try to make a 'copy' of the input signal, but we do not know the parameter $\xi$ exactly. Consequently, we have to design a system to estimate that parameter.
b, Discrimination function
The objective of this function is to extract the error $\hat{\xi}[k]-\xi$ from the combined signal $z[k,\xi]$.
c, Low-pass loop filter
Unfortunately, the input signal includes not only the desired signal but also noise. The filter reduces the noise in order to obtain a more accurate estimate of $\xi$.
d, Parameter estimation
The estimated parameter $\hat{\xi}$ is updated every loop with the difference between the input and local signals:
$$\hat{\xi}[k+1]=\hat{\xi}[k]+\epsilon_{\xi}[k]$$
e, Delay
This block models the fact that the estimate will be utilized in the next loop iteration.
f, Reference signal generator
Generates the local reference signal.

The linearized model can be expressed as follows:



$$\hat{\xi}[z]=(\beta\delta_{\xi}[z]+\eta[z])F(z)D(z)$$
$$\delta_{\xi}[z]=\xi[z]-\hat{\xi}[z]$$
$$\Rightarrow\hat{\xi}[z]=\frac{{\beta}F(z)D(z)}{1+{\beta}F(z)D(z)}\xi[z]+\frac{F(z)D(z)}{1+{\beta}F(z)D(z)}\eta[z]$$
$$\Rightarrow\delta_{\xi}[z]=\frac{1}{1+{\beta}F(z)D(z)}\xi[z]-\frac{F(z)D(z)}{1+{\beta}F(z)D(z)}\eta[z]$$
Transfer functions from $\xi[k]$ and $\eta[k]$ to $\hat{\xi}[k]$:
$$G_\xi(z)=\frac{{\beta}F(z)D(z)}{1+{\beta}F(z)D(z)}$$
$$G_\eta(z)=\frac{F(z)D(z)}{1+{\beta}F(z)D(z)}$$
Transfer functions from $\xi[k]$ and $\eta[k]$ to $\delta_{\xi}[k]$:
$$H_\xi(z)=\frac{1}{1+{\beta}F(z)D(z)}$$
$$H_\eta(z)=-\frac{F(z)D(z)}{1+{\beta}F(z)D(z)}$$
The function D(z) can be identified directly from the update equation:
$$\hat{\xi}[k+1]=\hat{\xi}[k]+\epsilon_\xi[k]$$
$$\Rightarrow{D(z)}=\frac{1}{z-1}$$
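The accumulator $D(z)$ together with a first-order filter $F(z)=\gamma$ can be simulated directly. A noise-free Python sketch (illustrative, with arbitrary gain values) showing the estimate converging to a step input:

```python
# Linearized tracking loop: discriminator gain beta, first-order loop filter
# F(z) = gamma, accumulator D(z) = 1/(z-1) implemented as a running update.
beta, gamma = 1.0, 0.2
A = 5.0            # step input: xi[k] = A for all k
xi_hat = 0.0
for _ in range(200):
    err = beta * (A - xi_hat)   # discriminator output (noise-free)
    xi_hat += gamma * err       # xi_hat[k+1] = xi_hat[k] + gamma * err
# The estimation error shrinks geometrically with ratio (1 - beta*gamma).
```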
2.1. Noise-free signal
We consider the behavior of the tracking filter with two kinds of input:
a step input $\xi[k]=Au[k]$ ($\xi(z)=\frac{Az}{z-1}$) and a ramp input $\xi[k]=Aku[k]$ ($\xi(z)=\frac{Az}{(z-1)^2}$).
We expect our estimate to converge to the true value. The final value theorem states:
$$\lim_{k\rightarrow\infty}f[k]=\lim_{s\rightarrow0}sf(s)=\lim_{z\rightarrow1}(z-1)f(z)$$
It's clear that the steady-state error $\delta_\xi[z]=H_\xi(z)\xi[z]$ behaves as follows.
For the first-order loop filter, $F(z)=\gamma$:
$$\lim_{z\rightarrow1}(z-1)\delta_{\xi,step}(z)=0$$
$$\lim_{z\rightarrow1}(z-1)\delta_{\xi,ramp}(z)=\frac{A}{\beta\gamma}$$
For the second-order loop filter, $F(z)=\frac{az-b}{z-1}$:
$$\lim_{z\rightarrow1}(z-1)\delta_{\xi,step}(z)=0$$
$$\lim_{z\rightarrow1}(z-1)\delta_{\xi,ramp}(z)=0$$
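For the first-order case, the ramp result follows directly by substituting $F(z)=\gamma$ and $D(z)=\frac{1}{z-1}$ into $H_\xi(z)$ and applying the final value theorem to the error:
$$H_\xi(z)=\frac{1}{1+\frac{\beta\gamma}{z-1}}=\frac{z-1}{z-1+\beta\gamma}$$
$$\lim_{z\rightarrow1}(z-1)H_\xi(z)\frac{Az}{(z-1)^2}=\lim_{z\rightarrow1}\frac{Az}{z-1+\beta\gamma}=\frac{A}{\beta\gamma}$$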
2.2. Signal plus noise
We consider only the variance (jitter) at the output of the filter:
$$\xi_\eta[k]=\eta[k]*h[k]=\sum_{n=0}^{+\infty}h[n]\eta[k-n]$$
$$E(|\xi_\eta[k]|^2)=E\left(\sum_{n=0}^{+\infty}h[n]\eta[k-n]\sum_{p=0}^{+\infty}h^*[p]\eta^*[k-p]\right)$$
$$=E\left(\sum_{n=0}^{+\infty}\sum_{p=0}^{+\infty}h[n]h^*[p]\eta[k-n]\eta^*[k-p]\right)$$
$$=\sum_{n=0}^{+\infty}|h[n]|^2E(|\eta[k-n]|^2)=\sigma^2_\eta\sum_{n=0}^{+\infty}|h[n]|^2$$
$$=\sigma^2_\eta\int_{-0.5}^{0.5}|H(e^{j2{\pi}f})|^2{df}$$
The equivalent noise bandwidth $B_{eq}$ is defined through an ideal rectangular filter with $H_{eq}(f_a)=H(0)$ for $|f_a|\leq{B_{eq}}$ that passes the same noise power; the bandwidth controls the amount of noise allowed through the filter:
$$\int_{-0.5}^{0.5}|H(f)|^2df=T_s\int_{-F_s/2}^{F_s/2}|H_{eq}(f_a)|^2df_a=T_s2B_{eq}|H(0)|^2$$
$$B_{eq}=\frac{\int_{-F_s/2}^{F_s/2}|H(f_a)|^2df_a}{2|H(0)|^2}$$
$$=\frac{1}{2|H(0)|^2(j2\pi{T_s})}\oint_CH(z)H^*(z^{-1})\frac{dz}{z}$$
Define
$$g(z)=H(z)H^*(z^{-1})\frac1z$$
By the residue theorem,
$$B_{eq}=\frac{1}{2|H(0)|^2T_s}\sum_k\operatorname{Res}_{z=p_k}g(z)=\frac{1}{2|H(0)|^2T_s}\sum_k\lim_{z\rightarrow{p_k}}(z-p_k)g(z)$$
where each $p_k$ is a pole of $g(z)$ lying inside the unit circle C, and
$$|H(0)|^2=\left(H(z)H^*(z^{-1})\right)\big|_{z=1}$$
a, First-order filter
$$|H(0)|^2=\frac{1}{\beta^2}$$
$g(z)$ has a single pole inside C, at $z=1-\beta\gamma$, and
$$\sum_k\lim_{z\rightarrow{p_k}}(z-p_k)g(z)=\frac{\gamma^2}{1-(1-\beta\gamma)^2}$$
$$B_{eq}\approx\frac{\beta\gamma}{4T_s}$$
Finally:
$$\sigma^2_\xi\approx\sigma^2_\eta\frac{\gamma}{2\beta}$$
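As a quick numeric sanity check (a Python sketch; the blog's own examples use Matlab): the first-order noise transfer function $H_\eta(z)=-\gamma/(z-1+\beta\gamma)$ has impulse response $h[n]=-\gamma(1-\beta\gamma)^{n-1}$ for $n\geq1$, and summing $|h[n]|^2$ reproduces both the closed form and the $\gamma/(2\beta)$ approximation:

```python
# Noise power gain of the first-order loop, computed three ways: a truncated
# impulse-response sum, the closed form, and the small-(beta*gamma)
# approximation gamma/(2*beta) used for the jitter formula above.
beta, gamma = 1.0, 0.01
power_sum = sum((gamma * (1 - beta * gamma) ** (n - 1)) ** 2
                for n in range(1, 20000))
closed_form = gamma ** 2 / (1 - (1 - beta * gamma) ** 2)
approx = gamma / (2 * beta)
```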
b, Second-order filter
For this filter we have
$$|H(0)|^2=\frac{1}{\beta^2}$$
$$H_\eta(z)=\frac{-az+b}{(z-1)^2+\beta(az-b)}$$
$$g(z)=\frac{H(z)H^*(z^{-1})}{z}$$
If $H(z)$ has two poles $\alpha$ and $\alpha^*$, then $H^*(z^{-1})$ has two poles $\frac{1}{\alpha}$ and $\frac{1}{\alpha^*}$. Obviously only two of them lie inside the unit circle; suppose they are $\alpha$ and its conjugate $\alpha^*$.
(It's easy to prove that $\alpha=1-\frac{{\beta}a}{2}+j\sqrt{1-{\beta}b-(1-\frac{{\beta}a}{2})^2}$)
$$B_{eq}=\frac{\beta^2}{T_s}\Re\left\{\frac{(a\alpha-b)(a^*-b^*\alpha)}{2j\Im(\alpha)(1-|\alpha|^2)(1-\alpha^2)}\right\}$$
$$\sigma^2_\delta=2\sigma^2_{\eta}\Re\left\{\frac{(a\alpha-b)(a^*-b^*\alpha)}{2j\Im(\alpha)(1-|\alpha|^2)(1-\alpha^2)}\right\}$$
The parameters a and b are chosen by analogy with the continuous-time filter as follows:
$$\rho\approx\frac{4B_{eq}T_s}{1+4\zeta^2(1+2B_{eq}T_s)}$$
$${\beta}b\approx4\zeta^2\rho$$
$${\beta}a\approx4\zeta^2\rho(1+2\zeta^2\rho)$$
In practice the damping factor is chosen as $\zeta=\frac{\sqrt{2}}{2}$.
As the formulas above show, the jitter depends only on the characteristics of the filter.
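The parameter selection above is easy to script. A small Python helper (a sketch of the approximations only, with $\beta$, $\zeta$ and $B_{eq}T_s$ as inputs):

```python
# Compute the second-order coefficients a, b from a target Beq*Ts product and
# damping factor zeta, using the approximations rho, beta*b, beta*a above.
def second_order_coeffs(Beq_Ts, zeta, beta=1.0):
    rho = 4 * Beq_Ts / (1 + 4 * zeta ** 2 * (1 + 2 * Beq_Ts))
    b = 4 * zeta ** 2 * rho / beta
    a = 4 * zeta ** 2 * rho * (1 + 2 * zeta ** 2 * rho) / beta
    return a, b
```

For any positive bandwidth and damping this yields $a>b>0$, with $a/b=1+2\zeta^2\rho$.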

Thursday, November 6, 2014

Principal Components Analysis (PCA)

To understand Principal Components Analysis, take the age and height of trees in a forest as an example; the data is distributed as follows:

It's clear that you don't need 2 dimensions (height and age) to represent a particular tree. If you take another representative $ha=\frac1{\sqrt{2}}(height+age)$, you then need only 1D to represent the trees.
In other words, you want to find the most important characteristic of the trees in this data set; it may be $ha=\frac1{\sqrt{2}}(height+age)$.
Mathematically, just answer the simple question: can we reduce the number of dimensions of the data set while keeping its most important properties?
In practice, we have many more dimensions than in the above example; in that case, dimensionality reduction is very necessary.
In essence, this is a CHANGE OF BASIS.
The old (observed) basis is:
$$B=\begin{bmatrix}b_1\\b_2\\...\\b_m\end{bmatrix}=\begin{bmatrix}1&0&...&0\\0&1&...&0\\...\\0&0&...&1\end{bmatrix}=I$$
We would like to change the basis to another basis by a linear combination:
$$B_P=PB=P$$
The rows of P are a set of new basis vectors.
MATRIX P IS CALLED AS THE PRINCIPAL COMPONENTS OF X
The data is re-expressed as
$$Y=PX$$
That leads to two questions:
- What is the best way to re-express X?
- What is a good choice of basis P?
It's obvious that two zero-mean variables A and B are uncorrelated if and only if E(AB)=0. In practice, A and B are each represented by a vector of observed data; therefore A and B are uncorrelated if and only if $A^TB=0$.
ANSWER THE QUESTIONS:
The BEST WAY to re-express X is that
$$C_Y=\begin{bmatrix}y_{11}&0&...&0\\0&y_{22}&...&0\\...\\0&0&...&y_{mm}\end{bmatrix}$$
It means that the re-expressed data is uncorrelated along the new basis vectors.
$$C_Y=\frac1mYY^T=\frac1mPXX^TP^T=PC_XP^T$$
$C_X=\frac1mXX^T$ is a symmetric matrix. It can be decomposed as $C_X=EDE^T$, where D is a diagonal matrix and $E=\begin{bmatrix}e_1&e_2&...&e_m\end{bmatrix}$, whose $i^{th}$ column is the eigenvector $e_i$.
$C_Y$ now becomes $C_Y=(PE)D(PE)^T$. We would like to choose P so that $C_Y$ is diagonal. If we choose $P=E^T$, then $C_Y=D$, because E is orthonormal ($EE^T=E^TE=I$).
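A small numeric check (sketched in Python/NumPy for convenience; the Matlab code below does the same thing) that choosing $P=E^T$ really diagonalizes the covariance:

```python
# Verify that P = E^T (rows = eigenvectors of C_X) makes C_Y = P C_X P^T diagonal.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1000))
X[1] += 0.8 * X[0]                    # correlate the two dimensions
X -= X.mean(axis=1, keepdims=True)    # remove the mean of each dimension
C_X = X @ X.T / X.shape[1]
eigvals, E = np.linalg.eigh(C_X)      # columns of E are eigenvectors of C_X
Y = E.T @ X                           # Y = P X with P = E^T
C_Y = Y @ Y.T / Y.shape[1]            # equals diag(eigvals) up to round-off
```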
HOW TO SOLVE PCA
1. Eigenvector decomposition
- The principal components of X are the eigenvectors of $C_X=\frac1mXX^T$
- The $i^{th}$ diagonal value of $C_Y$ is the variance of X along $p_i$
2. Using Singular Value Decomposition (SVD)
If X is an arbitrary n x m matrix and the rank of $X^TX$ is r, we will prove that the principal components of X are obtained from the set of orthogonal eigenvectors $\{v_1,v_2,...,v_r\}$ of $X^TX$.
Proof
Because $v_i$ is an eigenvector of $X^TX$ associated with the eigenvalue $\lambda_i$:
$$(X^TX)v_i=\lambda_iv_i$$
We denote $u_i=\frac1{\sigma_i}Xv_i$, where $\sigma_i=\sqrt{\lambda_i}$.
Column by column this gives $Xv_i=\sigma_iu_i$, or in matrix form:
$$XV=U\Sigma$$
Where:
$$\Sigma=\begin{bmatrix}\sigma_1&0&...&0&...&0\\0&\sigma_2&...&0&...&0\\...\\0&0&...&\sigma_r&...&0\\0&0&...&0&...&0\end{bmatrix}$$
$$V=\begin{bmatrix}v_1&v_2&...&v_m\end{bmatrix}$$
$$U=\begin{bmatrix}u_1&u_2&...&u_n\end{bmatrix}$$
The matrices V and U are completed with (m-r) and (n-r) additional orthonormal vectors, respectively. The properties of U and V are:
- The columns of V are mutually orthonormal (by definition), and so are the columns of U ($u_i^Tu_j=0$ for $i\neq{j}$)
- $||v_i||=1$ (by definition) and $||u_i||=1$
In conclusion, an arbitrary n x m matrix can be factored as:
$$X=U{\Sigma}V^T$$
It's obvious that
$$X^TX=V{\Sigma}^TU^TU{\Sigma}V^T=VDV^T$$
Therefore, we can compute the SVD of X to find the principal components of X.
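A quick numeric check (a Python/NumPy sketch, not part of the original post) that the SVD of X reproduces the eigen-decomposition of $XX^T$, i.e. that the $\sigma_i^2$ are its eigenvalues:

```python
# The singular values of X squared equal the eigenvalues of X X^T.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 500))
X -= X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) Vt
w, E = np.linalg.eigh(X @ X.T)                     # eigenvalues, ascending
```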

Source code to test PCA in Matlab
(Figures: the input data X; the output data after projection with the eigen-decomposition method; the output data with the SVD method.)

 %Create the samples
 X=[1:1000;1:1000];      % X is 2-dimensional with 1000 samples
 X=X+100*randn(2,1000);  % Add some 'noise' around the true value
 plot(X(1,:),X(2,:),'or'); %Visualize the samples
 grid on;
 axis([-2000 2000 -2000 2000])
 %OBJECTIVE:
 %FIND THE PCs OF X
 %PROJECT X ONTO A NEW BASIS (FIND Y) WHERE IT IS 'BETTER'
 N=size(X,2);            %Determine the number of samples
 mn=mean(X,2);
 X=X-repmat(mn,1,N);     %Remove the mean of each dimension first
 %1st method (EIGENVECTOR DECOMPOSITION)
 covX=1/(N-1)*(X*X');
 [PC,V]=eig(covX);
 %Extract the diagonal of the matrix as a vector
 %and sort the variances in decreasing order
 diagV=diag(V);
 [tmp,srt]=sort(-1*diagV);
 V=diagV(srt);
 PC=PC(:,srt);
 Y=PC'*X;
 figure;
 plot(Y(1,:),Y(2,:),'o'); grid on;
 axis([-2000 2000 -2000 2000])
 %2nd method (SINGULAR VALUE DECOMPOSITION)
 [u,S,PC]=svd(X'/sqrt(N-1),'econ'); %Economy SVD: PC holds the eigenvectors
 S=diag(S);
 V2=S.*S;                %Variances along the PCs
 Y2=PC'*X;
 figure;
 plot(Y2(1,:),Y2(2,:),'o'); grid on;
 axis([-2000 2000 -2000 2000])

Reference 

[1] Shlens, Jonathon. "A tutorial on principal component analysis." arXiv preprint arXiv:1404.1100 (2014).

Thursday, October 23, 2014

Derivatives of Matrix to Vector/Scalar


$$y=Hx+n$$
$$y=\begin{bmatrix}y_1\\y_2\\...\\y_n\end{bmatrix}=\begin{bmatrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...& a_{2n}\\...\\a_{n1}&a_{n2}&...& a_{nn}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\...\\x_n\end{bmatrix}+\begin{bmatrix}n_1\\n_2\\...\\n_n \end{bmatrix}$$
$$\min_x\|y-Hx\|^2$$
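Setting the gradient of the quadratic cost with respect to $x$ to zero gives the classical least-squares solution:
$$\frac{\partial}{\partial{x}}(y-Hx)^T(y-Hx)=-2H^T(y-Hx)=0$$
$$\Rightarrow\hat{x}=(H^TH)^{-1}H^Ty$$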

Wednesday, October 22, 2014

WHITE NOISE


White noise is defined to be a stationary random process having a constant power spectral density:
$$S_{wn}(j\omega)=\int_{-\infty}^{\infty}R_X(\tau)e^{-j\omega\tau}d\tau=A=const$$
where the autocorrelation is $$R_X(\tau)=E[X(t)X(t+\tau)]$$
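A quick Python sketch (illustrative, not from the original post) showing that a sampled white Gaussian noise sequence has an estimated autocorrelation close to a delta: $R_X(0)\approx\sigma^2$ and $R_X(\tau)\approx0$ for $\tau\neq0$:

```python
# Estimated autocorrelation of white Gaussian noise at lags 0 and 5.
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
x = sigma * rng.standard_normal(200_000)
R0 = float(np.mean(x * x))           # close to sigma^2 = 4
R5 = float(np.mean(x[:-5] * x[5:]))  # close to 0 for white noise
```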

Sunday, October 19, 2014

CW Interference

This post shows the theory of CW interference, together with Matlab source code for validating the theory.
- The received satellite signal can be represented as
$$s(t)=A_sC(t)D(t)\cos(2\pi(f_c+f_d)t+\Phi_0)$$
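A minimal Python sketch generating this signal model (all numeric parameter values here are hypothetical placeholders, and the per-sample random $\pm1$ sequence only stands in for the real C/A code):

```python
# Generate s(t) = A_s * C(t) * D(t) * cos(2*pi*(fc+fd)*t + Phi0) at an
# assumed IF. C(t) is a stand-in +/-1 sequence; D(t) is one constant data bit.
import numpy as np

fs = 4.092e6          # sampling rate (Hz), assumed
f_if = 1.25e6         # carrier frequency after downconversion, assumed
fd = 1000.0           # Doppler shift (Hz), assumed
A_s, phi0 = 1.0, 0.0
t = np.arange(0, 1e-3, 1 / fs)           # 1 ms of samples
rng = np.random.default_rng(0)
C = rng.choice([-1.0, 1.0], size=t.size) # stand-in for the spreading code
D = 1.0                                  # one navigation data bit
s = A_s * C * D * np.cos(2 * np.pi * (f_if + fd) * t + phi0)
```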