We want to find which sinusoids compose a function, $f$.
To do so, we "project" $f$ onto each candidate sinusoid to find how much of that sinusoid can be used in $f$.
First we'll discuss the inner product (or dot product) for discrete and continuous functions. This operation "projects" one function onto another.
$a = [ a_1, a_2, \ldots, a_n ]$
$b = [ b_1, b_2, \ldots, b_n ]$
$a \cdot b = \langle a, b \rangle = \sum_i a_i b_i$
For the standard basis vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$, dotting with a point extracts its coordinates:
$(x,y) \cdot e_1 = 1 x + 0 y = x$
$(x,y) \cdot e_2 = 0 x + 1 y = y$
This relies on $|e_1| = |e_2| = 1$.
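As a quick numerical check (a minimal NumPy sketch; the sample values are arbitrary):

```python
import numpy as np

# Dot product of two sample vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(np.dot(a, b))                  # 1*4 + 2*5 + 3*6 = 32.0

# Dotting a point with the unit basis vectors recovers its coordinates.
p  = np.array([3.0, 7.0])            # (x, y)
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
print(np.dot(p, e1), np.dot(p, e2))  # 3.0 7.0
```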
The complex conjugate of $a + b \; i$ is $a - b \; i$ and is denoted $\overline{a + b \; i}$.
With complex numbers, the inner product is modified to conjugate one argument: $\langle a, b \rangle = \sum_i a_i \overline{b_i}$, so that $\langle a, a \rangle = \sum_i |a_i|^2$ is real and nonnegative.
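A quick numerical check of this convention (a NumPy sketch with arbitrary sample values; note that some texts conjugate the first argument instead of the second):

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 4j])

# <a, b> = sum_i a_i * conj(b_i), matching the definition above.
# (np.vdot conjugates its *first* argument, so conjugate b explicitly instead.)
print(np.sum(a * np.conj(b)))

# With the conjugate, <a, a> is real and nonnegative: the squared length of a.
print(np.sum(a * np.conj(a)))   # (1 + 4) + (9 + 1) = 15 (printed as 15+0j)
```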
For a rotation $R$, let $u' = R u$ and $v' = R v$. Rotations preserve the dot product: $u' \cdot v' = u \cdot v$.
Note that the dot product $a \cdot b$ is the same as the matrix product $a^\mathsf{T} b$, so
$\begin{array}[t]{rl} u' \cdot v' & = (R u)^\mathsf{T} (R v) \\ & = (u^\mathsf{T} R^\mathsf{T}) (R v) \\ & = u^\mathsf{T} (R^\mathsf{T} R) v \\ & = u^\mathsf{T} v \\ & = u \cdot v \end{array}$.
This works because $R$ is a rotation matrix and $R^\mathsf{T} = R^{-1}$.
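A quick numerical check (a NumPy sketch with an arbitrary rotation angle and arbitrary vectors):

```python
import numpy as np

theta = 0.7                      # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([2.0, 1.0])
v = np.array([-1.0, 3.0])

# Rotating both vectors leaves the dot product unchanged.
print(np.dot(R @ u, R @ v))      # same value...
print(np.dot(u, v))              # ...as the unrotated dot product (1.0)
```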
Note that $|u| \, |u| = |u|^2 = u \cdot u$.
Therefore, ${v \cdot u \over u \cdot u} u = \left( v \cdot \frac{u}{|u|} \right) \frac{u}{|u|}$ is the vector that is the projection of $v$ onto the line of $u$: the length of $v$'s component along $u$ times the unit vector in the direction of $u$. It is the closest we can get to $v$ using only a multiple of $u$.
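As a numerical check of the projection formula (a NumPy sketch with arbitrary example vectors):

```python
import numpy as np

u = np.array([2.0, 0.0, 1.0])
v = np.array([1.0, 3.0, 4.0])

# Projection of v onto the line of u: (v.u / u.u) u
proj = (np.dot(v, u) / np.dot(u, u)) * u
print(proj)                      # [2.4, 0.0, 1.2]

# The residual v - proj is orthogonal to u, confirming that proj is the
# closest point to v on the line spanned by u.
print(np.dot(v - proj, u))       # ~0
```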
In the right triangle whose hypotenuse is $v$ and whose adjacent side is the projection of $v$ onto the line of $u$,
$\begin{array}{rl}
\cos \theta & = {\text{adjacent} \over \text{hypotenuse}} \\
& = {v \cdot \frac{u}{|u|} \over |v|}
\end{array}$
So $u \cdot v = |u| |v| \cos\theta$.
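A quick sanity check (a sketch with vectors constructed at a known angle, so both sides can be computed independently):

```python
import numpy as np

theta = np.pi / 3                                        # known angle between u and v
u = 2.0 * np.array([1.0, 0.0])                           # |u| = 2
v = 3.0 * np.array([np.cos(theta), np.sin(theta)])       # |v| = 3, at angle theta to u

print(np.dot(u, v))                                            # 3.0
print(np.linalg.norm(u) * np.linalg.norm(v) * np.cos(theta))   # 3.0
```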
We can write sampled functions as vectors of their sample values, for example
$\begin{array}{rrrrrrrrrrrrr} f = [ & 0.8 & 2.1 & 1.4 & 1.0 & 1.5 & 1.9 & 1.6 & 1.6 & 2.0 & 1.7 & 0.9 & ] \\ g = [ & 0.0 & 1.0 & 1.5 & 1.0 & 0.0 & -1.0 & -1.5 & -1.0 & 0.0 & 1.0 & 1.5 & ] \end{array}$
Then the length of the projection of $f$ onto $g$ is $$f \cdot \frac{g}{|g|} = {f \cdot g \over |g|}$$
and the closest we can approximate $f$ with a multiple of $g$ is $$\begin{array}[t]{rl} (f \cdot \frac{g}{|g|}) {g \over |g|} & = ({f \cdot g \over |g|^2 }) g \\ & = ({f \cdot g \over g \cdot g}) g \end{array}$$
For a sampled function, we compute $${ f \cdot g \over g \cdot g } = { \sum_i f_i g_i \over \sum_i g_i^2 }$$
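For the sampled $f$ and $g$ above, this is easy to check numerically; a minimal NumPy sketch (the arrays repeat the sample values listed earlier):

```python
import numpy as np

f = np.array([0.8, 2.1, 1.4, 1.0, 1.5, 1.9, 1.6, 1.6, 2.0, 1.7, 0.9])
g = np.array([0.0, 1.0, 1.5, 1.0, 0.0, -1.0, -1.5, -1.0, 0.0, 1.0, 1.5])

# Coefficient of the best approximation of f by a multiple of g.
c = np.dot(f, g) / np.dot(g, g)
print(c)                         # ~0.2

# The closest we can get to f using only a multiple of g.
approx = c * g
print(approx)

# Sanity check: the residual is orthogonal to g.
print(np.dot(f - approx, g))     # ~0
```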
As the spacing between samples goes to zero, the sampled function becomes continuous and sums become integrals: $${ f \cdot g \over g \cdot g } = { \int f(x) g(x) dx \over \int g(x)^2 dx }$$
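The continuous coefficient can be approximated the same way; a sketch assuming, purely for illustration, $f(x) = 1 + \sin x$ projected onto the candidate sinusoid $g(x) = \sin x$ over one period, where the coefficient should come out to $1$:

```python
import numpy as np

# Example function (assumed for illustration) and a candidate sinusoid.
f = lambda x: 1.0 + np.sin(x)
g = lambda x: np.sin(x)

# Approximate both integrals over one period with a fine Riemann sum.
x, dx = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False, retstep=True)
coeff = (np.sum(f(x) * g(x)) * dx) / (np.sum(g(x) ** 2) * dx)
print(coeff)                     # ~1.0: f contains exactly one "unit" of sin(x)
```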