<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Functional Principal Components on R Views</title>
    <link>https://rviews.rstudio.com/tags/functional-principal-components/</link>
    <description>Recent content in Functional Principal Components on R Views</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 10 Jun 2021 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://rviews.rstudio.com/tags/functional-principal-components/" rel="self" type="application/rss+xml" />
    
    
    
    
    <item>
      <title>Functional PCA with R</title>
      <link>https://rviews.rstudio.com/2021/06/10/functional-pca-with-r/</link>
      <pubDate>Thu, 10 Jun 2021 00:00:00 +0000</pubDate>
      
      <guid>https://rviews.rstudio.com/2021/06/10/functional-pca-with-r/</guid>
      <description>
        
&lt;script src=&#34;/2021/06/10/functional-pca-with-r/index_files/header-attrs/header-attrs.js&#34;&gt;&lt;/script&gt;


&lt;p&gt;In two previous posts, &lt;a href=&#34;https://rviews.rstudio.com/2021/05/04/functional-data-analysis-in-r/&#34;&gt;Introduction to Functional Data Analysis with R&lt;/a&gt; and &lt;a href=&#34;https://rviews.rstudio.com/2021/05/14/basic-fda-descriptive-statistics-with-r/&#34;&gt;Basic FDA Descriptive Statistics with R&lt;/a&gt;, I began looking into FDA from a beginner’s perspective. In this post, I would like to continue where I left off and investigate Functional Principal Components Analysis (FPCA), the analog of ordinary Principal Components Analysis in multivariate statistics. I’ll begin with the math, and then show how to compute FPCs with R.&lt;/p&gt;
&lt;p&gt;As I have discussed previously, although the theoretical foundations of FDA depend on some pretty advanced mathematics, it is not necessary to master this math to do basic analyses. The R functions in the various packages insulate the user from most of the underlying theory. Nevertheless, attaining a deep understanding of what the R functions are doing, or looking into any of the background references requires some level of comfort with the notation and fundamental mathematical ideas.&lt;/p&gt;
&lt;p&gt;I will define some of the basic concepts and then provide a high-level roadmap of the mathematical argument required to develop FPCA from first principles. It is my hope that if you are a total newcomer to Functional Data Analysis you will find this roadmap useful in grasping the big picture. This synopsis closely follows the presentation by Kokoszka and Reimherr (Reference 1 below).&lt;/p&gt;
&lt;p&gt;We are working in &lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt;, a separable &lt;a href=&#34;https://en.wikipedia.org/wiki/Hilbert_space#:~:text=A%20Hilbert%20space%20is%20a,of%20calculus%20to%20be%20used.&#34;&gt;Hilbert space&lt;/a&gt; of square integrable random functions &lt;span class=&#34;math inline&#34;&gt;\(X(\omega,t)\)&lt;/span&gt;, where &lt;span class=&#34;math inline&#34;&gt;\(\omega \in \Omega\)&lt;/span&gt;, the underlying space of probabilistic outcomes, and &lt;span class=&#34;math inline&#34;&gt;\(t \in [0,1]\)&lt;/span&gt;. (After the definitions below, I will suppress the independent variables and in most equations assume &lt;span class=&#34;math inline&#34;&gt;\(EX = 0\)&lt;/span&gt;.)&lt;/p&gt;
&lt;div id=&#34;definitions&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Definitions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A Hilbert Space &lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt; is an infinite dimensional vector space with an inner product denoted by &lt;span class=&#34;math inline&#34;&gt;\(&amp;lt;.,.&amp;gt;\)&lt;/span&gt;. In our case, the &lt;em&gt;vectors&lt;/em&gt; are functions.&lt;/li&gt;
&lt;li&gt;&lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt; is separable if there exists an orthonormal basis. That is, there is an orthonormal collection of functions &lt;span class=&#34;math inline&#34;&gt;\((e_i)\)&lt;/span&gt; in &lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt; such that &lt;span class=&#34;math inline&#34;&gt;\(&amp;lt;e_i,e_j&amp;gt;\; = 1\)&lt;/span&gt; if &lt;span class=&#34;math inline&#34;&gt;\(i = j\)&lt;/span&gt; and 0 otherwise, and every function in &lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt; can be represented as a linear combination of these functions.&lt;/li&gt;
&lt;li&gt;The inner product of two functions &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; and &lt;span class=&#34;math inline&#34;&gt;\(Y\)&lt;/span&gt; in &lt;span class=&#34;math inline&#34;&gt;\(\mathscr{H}\)&lt;/span&gt; is defined as &lt;span class=&#34;math inline&#34;&gt;\(&amp;lt;X,Y&amp;gt;\; = \int X(\omega,t) Y(\omega,t)dt\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;The norm of &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; is defined in terms of the inner product: &lt;span class=&#34;math inline&#34;&gt;\(\parallel X(\omega) \parallel ^2\; = \int X(\omega, t)^2 dt &amp;lt; \infty\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;&lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; is said to be square integrable if &lt;span class=&#34;math inline&#34;&gt;\(E\parallel X(\omega) \parallel ^2 &amp;lt; \infty\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;https://math.stackexchange.com/questions/1687111/understanding-the-definition-of-the-covariance-operator&#34;&gt;covariance operator&lt;/a&gt; &lt;span class=&#34;math inline&#34;&gt;\(C: \mathscr{H} \rightarrow \mathscr{H}\)&lt;/span&gt; for any square integrable function &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; is given by: &lt;span class=&#34;math inline&#34;&gt;\(C(y) = E[&amp;lt;X - EX,y&amp;gt;(X - EX)]\)&lt;/span&gt;.&lt;/li&gt;
&lt;/ul&gt;
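These definitions are concrete enough to check numerically. Below is a small sketch of my own (not from the original post) that discretizes the inner product as a Riemann sum and confirms that the first two functions of the familiar Fourier sine basis on \([0,1]\) are orthonormal:

```r
# A sketch (mine, not from the post): approximate the inner product of
# two functions on [0, 1] by a Riemann sum and check orthonormality of
# the Fourier sine basis e_j(t) = sqrt(2) * sin(j * pi * t).
t_grid = seq(0, 1, length.out = 10001)
dt = t_grid[2] - t_grid[1]
inner = function(f, g) sum(f * g) * dt  # discretized integral of f*g dt
e1 = sqrt(2) * sin(1 * pi * t_grid)
e2 = sqrt(2) * sin(2 * pi * t_grid)
c(inner(e1, e1), inner(e1, e2))  # approximately 1 and 0
```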
&lt;/div&gt;
&lt;div id=&#34;the-road-to-functional-principal-components&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;The Road to Functional Principal Components&lt;/h3&gt;
&lt;p&gt;As we have seen, the fundamental idea of Functional Data Analysis is to represent a function &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; by a linear combination of basis elements. In the previous posts we showed how to accomplish this using a basis constructed from more or less arbitrarily selected B-spline vectors. But is there an empirical, some would say &lt;em&gt;natural&lt;/em&gt;, basis that can be estimated from the data? The answer is yes, and that is what FPCA is all about.&lt;/p&gt;
&lt;p&gt;A good way to start is to look at the distance between a vector &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; and its projection onto the space spanned by some finite, &lt;span class=&#34;math inline&#34;&gt;\(p\)&lt;/span&gt;-dimensional orthonormal basis &lt;span class=&#34;math inline&#34;&gt;\((u_k)\)&lt;/span&gt;, which is expressed in the following equation:&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;math inline&#34;&gt;\(D = E\parallel X - \sum_{k=1}^{p}&amp;lt;X, u_k&amp;gt;u_k\parallel^2\)&lt;/span&gt;             &lt;span class=&#34;math inline&#34;&gt;\((*)\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;This expands out to:&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;math inline&#34;&gt;\(= E[&amp;lt;X - \sum_{k=1}^{p}&amp;lt;X, u_k&amp;gt;u_k,\; X - \sum_{k=1}^{p}&amp;lt;X, u_k&amp;gt;u_k&amp;gt;]\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;and, with a little algebra, reduces to:&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;math inline&#34;&gt;\(= E\parallel X \parallel^2 - \sum_{k=1}^{p}E&amp;lt;X, u_k&amp;gt;^2\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;It should be clear that we would want to find a basis that makes &lt;span class=&#34;math inline&#34;&gt;\(D\)&lt;/span&gt; as small as possible, and that minimizing &lt;span class=&#34;math inline&#34;&gt;\(D\)&lt;/span&gt; is equivalent to maximizing the term to be subtracted in the line above.&lt;/p&gt;
&lt;p&gt;A little algebra shows that, &lt;span class=&#34;math inline&#34;&gt;\(E&amp;lt;X, u_k&amp;gt;^2 \;=\; &amp;lt;C(u_k),u_k&amp;gt;\)&lt;/span&gt; where &lt;span class=&#34;math inline&#34;&gt;\(C(u_k)\)&lt;/span&gt; is the covariance operator defined above.&lt;/p&gt;
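That "little algebra" is short enough to display. Assuming \(EX = 0\), so that the covariance operator simplifies as in the definition above:

```latex
% assuming EX = 0, the covariance operator satisfies C(u) = E[\langle X, u\rangle X]
E\langle X, u_k\rangle^2
  = E\big[\langle X, u_k\rangle \, \langle X, u_k\rangle\big]
  = \Big\langle E\big[\langle X, u_k\rangle X\big],\; u_k \Big\rangle
  = \langle C(u_k),\, u_k\rangle
```

The middle step just moves the (deterministic) expectation inside one slot of the inner product.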
&lt;p&gt;Now, we are almost at our destination. There is a theorem (e.g. Theorem 11.4.1 in Reference 1) that says for any fixed number of basis elements &lt;span class=&#34;math inline&#34;&gt;\(p\)&lt;/span&gt;, the distance &lt;span class=&#34;math inline&#34;&gt;\(D\)&lt;/span&gt; above is minimized if &lt;span class=&#34;math inline&#34;&gt;\(u_j = v_j\)&lt;/span&gt;, where the &lt;span class=&#34;math inline&#34;&gt;\(v_j\)&lt;/span&gt; are the unit-norm eigenfunctions of the covariance operator &lt;span class=&#34;math inline&#34;&gt;\(C\)&lt;/span&gt;. From this it follows that &lt;span class=&#34;math inline&#34;&gt;\(E&amp;lt;X, v_j&amp;gt;^2 \;=\; &amp;lt;C(v_j),v_j&amp;gt;\; =\; &amp;lt;\lambda_j v_j, v_j&amp;gt;\; =\; \lambda_j\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Going back to equation &lt;span class=&#34;math inline&#34;&gt;\((*)\)&lt;/span&gt;, we can expand &lt;span class=&#34;math inline&#34;&gt;\(X\)&lt;/span&gt; in terms of the basis &lt;span class=&#34;math inline&#34;&gt;\((v_j)\)&lt;/span&gt; so &lt;span class=&#34;math inline&#34;&gt;\(D = 0\)&lt;/span&gt; and we have what is called the &lt;a href=&#34;https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem&#34;&gt;Karhunen–Loève&lt;/a&gt; expansion: &lt;span class=&#34;math inline&#34;&gt;\(X = \mu + \sum_{j=1}^{\infty}\xi_jv_j\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class=&#34;math inline&#34;&gt;\(\mu = EX\)&lt;/span&gt;&lt;br /&gt;
&lt;/li&gt;
&lt;li&gt;The deterministic basis functions &lt;span class=&#34;math inline&#34;&gt;\((v_j)\)&lt;/span&gt; are called the &lt;em&gt;functional principal components&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;The &lt;span class=&#34;math inline&#34;&gt;\((v_j)\)&lt;/span&gt; have unit norm and are unique up to their signs. (You can work with &lt;span class=&#34;math inline&#34;&gt;\(v_j\)&lt;/span&gt; or &lt;span class=&#34;math inline&#34;&gt;\(-v_j\)&lt;/span&gt;.)&lt;/li&gt;
&lt;li&gt;The eigenvalues are ordered so that: &lt;span class=&#34;math inline&#34;&gt;\(\lambda_1 \geq \lambda_2 \geq \dots \geq 0\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;The random variables &lt;span class=&#34;math inline&#34;&gt;\(\xi_j =\; &amp;lt;X - \mu,v_j&amp;gt;\)&lt;/span&gt; are called the &lt;em&gt;scores&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;span class=&#34;math inline&#34;&gt;\(E\xi_j = 0\)&lt;/span&gt;, &lt;span class=&#34;math inline&#34;&gt;\(E\xi_j^2 = \lambda_j\)&lt;/span&gt; and &lt;span class=&#34;math inline&#34;&gt;\(E[\xi_i\xi_j] = 0\)&lt;/span&gt; if &lt;span class=&#34;math inline&#34;&gt;\(i \neq j\)&lt;/span&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And finally, with one more line:&lt;br /&gt;
&lt;span class=&#34;math inline&#34;&gt;\(\sum_{j=1}^{\infty}\lambda_j \: = \: \sum_{j=1}^{\infty}E[&amp;lt;X,v_j&amp;gt;^2] = E\sum_{j=1}^{\infty}&amp;lt;X,v_j&amp;gt;^2 \; = \; E\parallel X \parallel^2 \; &amp;lt; \infty\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;we arrive at our destination, the variance decomposition:
&lt;span class=&#34;math inline&#34;&gt;\(E\parallel X - \mu \parallel^2 \;= \;\sum_{j=1}^{\infty}\lambda_j\)&lt;/span&gt;&lt;/p&gt;
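To make the variance decomposition tangible, here is a self-contained discrete sketch (my own construction, with arbitrary simulation settings): simulate Brownian paths on a grid, estimate the covariance operator, and check that its eigenvalues sum to the mean squared norm of the centered curves.

```r
# Discrete sketch of the variance decomposition (settings are my own).
set.seed(42)
n_curves = 200
n_pts = 100
increments = matrix(rnorm(n_curves * n_pts, sd = sqrt(1 / n_pts)), n_curves)
X = t(apply(increments, 1, cumsum))          # one Brownian path per row
Xc = scale(X, center = TRUE, scale = FALSE)  # subtract the mean function
C = crossprod(Xc) / n_curves                 # sample covariance on the grid
lambda = eigen(C, symmetric = TRUE)$values / n_pts  # quadrature weight dt = 1/n_pts
mean_sq_norm = mean(rowSums(Xc^2)) / n_pts   # estimate of E||X - mu||^2
c(sum(lambda), mean_sq_norm)                 # the two quantities agree
```

In the discretized setting the agreement is exact, since the sum of the eigenvalues is the trace of the covariance matrix.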
&lt;/div&gt;
&lt;div id=&#34;lets-calculate&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;Let’s Calculate&lt;/h3&gt;
&lt;p&gt;Now that we have enough math to set the context, let’s calculate. We will use the same simulated Brownian motion data that we used in the previous posts, and also construct the same B-spline basis that we used before and save it in the fda object &lt;code&gt;W.obj&lt;/code&gt;. I won’t repeat the code here.&lt;/p&gt;
&lt;p&gt;The following plot shows &lt;strong&gt;120&lt;/strong&gt; simulated curves, each having &lt;strong&gt;1000&lt;/strong&gt; points scattered over the interval &lt;strong&gt;[0, 100]&lt;/strong&gt;. Each curve has unique observation times over that interval.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;/2021/06/10/functional-pca-with-r/index_files/figure-html/unnamed-chunk-1-1.png&#34; width=&#34;672&#34; /&gt;
For our first attempt at calculating functional principal components we’ll use the &lt;code&gt;pca.fd()&lt;/code&gt; function from the &lt;code&gt;fda&lt;/code&gt; package, picking up our calculations exactly where we left off in the previous post. As before, the basis representations of these curves are packed into the fda object &lt;code&gt;W.obj&lt;/code&gt;, which &lt;code&gt;pca.fd()&lt;/code&gt; takes as input. It needs the non-orthogonal B-spline basis to seed its computations, from which it estimates the covariance matrix and the orthogonal eigenfunction basis &lt;span class=&#34;math inline&#34;&gt;\(v_j\)&lt;/span&gt;. The &lt;code&gt;nharm = 5&lt;/code&gt; parameter requests the first 5 harmonics (eigenfunctions).&lt;/p&gt;
&lt;p&gt;The method of calculation roughly follows the theory outlined above. It starts with a basis representation of the functions, computes the covariance matrix, and calculates the eigenfunctions.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;fun_pca &amp;lt;- pca.fd(W.obj, nharm = 5)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The object produced by &lt;code&gt;pca.fd()&lt;/code&gt; is fairly complicated. For example, the list &lt;code&gt;fun_pca$harmonics&lt;/code&gt; does not contain the eigenfunctions themselves, but rather coefficients that enable the eigenfunctions to be computed from the original basis. However, because there is a special plot method, &lt;code&gt;plot.pca.fd()&lt;/code&gt;, it is easy to plot the eigenfunctions.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;plot(fun_pca$harmonics, lwd = 3)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;/2021/06/10/functional-pca-with-r/index_files/figure-html/unnamed-chunk-3-1.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;## [1] &amp;quot;done&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It is also easy to obtain the eigenvalues &lt;span class=&#34;math inline&#34;&gt;\(\lambda_j\)&lt;/span&gt;,&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;fun_pca$values&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;##  [1] 37.232207  3.724524  1.703604  0.763120  0.547976  0.389431  0.196101
##  [8]  0.163289  0.144052  0.116587  0.089307  0.057999  0.054246  0.050683
## [15]  0.042738  0.035107  0.031283  0.024905  0.019079  0.016428  0.011657
## [22]  0.007392  0.002664&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and the proportion of the variance explained by each of the first five eigenvalues.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;fun_pca$varprop&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;## [1] 0.81965 0.08199 0.03750 0.01680 0.01206&lt;/code&gt;&lt;/pre&gt;
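As a sanity check (my own, using the numbers printed above), `varprop` appears to be just each eigenvalue's share of the total variance:

```r
# The eigenvalues copied from the fun_pca$values output above; each
# entry of varprop should be that eigenvalue divided by the total.
values = c(37.232207, 3.724524, 1.703604, 0.763120, 0.547976, 0.389431,
           0.196101, 0.163289, 0.144052, 0.116587, 0.089307, 0.057999,
           0.054246, 0.050683, 0.042738, 0.035107, 0.031283, 0.024905,
           0.019079, 0.016428, 0.011657, 0.007392, 0.002664)
round(values[1:5] / sum(values), 5)  # 0.81965 0.08199 0.03750 0.01680 0.01206
```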
&lt;/div&gt;
&lt;div id=&#34;a-different-approach&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;A Different Approach&lt;/h3&gt;
&lt;p&gt;So far in this short series of FDA posts, I have been mostly using the &lt;code&gt;fda&lt;/code&gt; package to calculate. When it was released in 2003, it was groundbreaking work. It is still the package that you are most likely to find when doing internet searches, and is the foundation for many subsequent R packages. However, as the &lt;a href=&#34;https://cran.r-project.org/web/views/FunctionalData.html&#34;&gt;CRAN Task View&lt;/a&gt; on Functional Data Analysis indicates, new work in FDA has resulted in several new R packages. The more recent &lt;a href=&#34;https://cran.r-project.org/package=fdapace&#34;&gt;&lt;code&gt;fdapace&lt;/code&gt;&lt;/a&gt; takes a different approach to calculating principal components. The package takes its name from the Principal Components by Conditional Expectation &lt;strong&gt;(PACE)&lt;/strong&gt; algorithm described in the paper by Yao, Müller and Wang (Reference 4 below). The package &lt;a href=&#34;https://cran.r-project.org/web/packages/fdapace/vignettes/fdapaceVig.html&#34;&gt;vignette&lt;/a&gt; is exemplary. It describes the methods of calculation, develops clear examples and provides a list of references to guide your reading about PACE and FDA in general.&lt;/p&gt;
&lt;p&gt;A very notable feature of the PACE algorithm is that it is designed specifically to work with sparse data. The vignette describes the two different methods of calculation that package functions employ for sparse and non-sparse data. In this post we are not working with sparse data, but I hope to do so in the future. See the vignette for examples of FPCA with sparse data.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;library(fdapace)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;fdapace&lt;/code&gt; package requires data for the functions (curves) and associated times be organized in lists. We begin by using the &lt;code&gt;fdapace::CheckData()&lt;/code&gt; function to check the data set up in the tibble &lt;code&gt;df&lt;/code&gt;. (See previous post on descriptive statistics for the details on the data construction.)&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;CheckData(df$Curve,df$Time)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No error message is generated, so we move on to having the &lt;code&gt;FPCA()&lt;/code&gt; function calculate the FPCA outputs, including:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;W_fpca &amp;lt;- FPCA(df$Curve,df$Time)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;the eigenvalues:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;W_fpca$lambda&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;## [1] 37.5386  3.2098  1.0210  0.3533&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;the cumulative proportion of variance explained by the eigenvalues,&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;W_fpca$cumFVE&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;## [1] 0.8875 0.9634 0.9875 0.9959&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and the scores:&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;head(W_fpca$xiEst)&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;##         [,1]    [,2]     [,3]     [,4]
## [1,]  0.8187  1.1971 -1.77755  0.20087
## [2,] -8.7396  2.4165 -0.33768 -0.10565
## [3,] -2.7517 -0.4879 -0.06747 -0.13953
## [4,]  3.9218  0.9419  0.39098 -0.25449
## [5,] -1.4400  0.7691  2.11549 -0.59947
## [6,]  7.3952  0.5114 -2.16391  0.05199&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All of these are in fairly good agreement with what we computed above. I am, however, a little surprised by the discrepancy in the value of the second eigenvalue. The default plot method for &lt;code&gt;FPCA()&lt;/code&gt; produces a plot indicating the density of the data, a plot of the mean of the functions reconstructed from the eigenfunction expansion, a scree plot of the eigenvalues and a plot of the first three eigenfunctions.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;plot(W_fpca)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;/2021/06/10/functional-pca-with-r/index_files/figure-html/unnamed-chunk-12-1.png&#34; width=&#34;672&#34; /&gt;&lt;/p&gt;
&lt;p&gt;Finally, it has probably already occurred to you that if you know the eigenfunctions and eigenvalues, the Karhunen–Loève expansion can be used to simulate random functions. It can be shown that for the Wiener process:&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;math inline&#34;&gt;\(v_j(t) = \sqrt{2} \sin((j - \frac{1}{2})\pi t)\)&lt;/span&gt; and &lt;span class=&#34;math inline&#34;&gt;\(\lambda_j = \frac{1}{(j - \frac{1}{2})^2\pi^2}\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;This gives us:&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;math inline&#34;&gt;\(W(t) = \sum_{j=1}^{\infty} \frac{\sqrt{2}}{(j - \frac{1}{2})\pi} N_j \sin((j - \frac{1}{2})\pi t)\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;where the &lt;span class=&#34;math inline&#34;&gt;\(N_j\)&lt;/span&gt; are iid &lt;span class=&#34;math inline&#34;&gt;\(N(0,1)\)&lt;/span&gt;.&lt;/p&gt;
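Before reaching for `fdapace::Wiener()`, the truncated expansion above is easy to code by hand. A base R sketch (the truncation level and grid size are my own arbitrary choices):

```r
# Truncated Karhunen-Loeve expansion of the Wiener process on [0, 1];
# the truncation level J and grid size are arbitrary choices of mine.
set.seed(1)
t_grid = seq(0, 1, length.out = 200)
J = 50                 # truncation level
N = rnorm(J)           # iid N(0, 1) random variables
W = rep(0, length(t_grid))
for (j in 1:J) {
  # add score sqrt(lambda_j) * N_j times eigenfunction sqrt(2) * sin((j - 1/2) * pi * t)
  W = W + sqrt(2) / ((j - 0.5) * pi) * N[j] * sin((j - 0.5) * pi * t_grid)
}
plot(t_grid, W, type = "l")  # one (smoothed) Brownian path
```

Dropping the high-frequency terms beyond \(j = J\) is what makes the simulated path smoother than a raw random walk.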
&lt;p&gt;The &lt;code&gt;fdapace&lt;/code&gt; function &lt;code&gt;fdapace::Wiener()&lt;/code&gt; uses this information to simulate an alternative, smoothed version of the Brownian motion (Wiener) process.&lt;/p&gt;
&lt;pre class=&#34;r&#34;&gt;&lt;code&gt;set.seed(123)
w &amp;lt;- Wiener(n = 1, pts = seq(0, 1, length = 100))
t &amp;lt;- 1:100
df_w &amp;lt;- tibble(t = t, w = as.vector(w))  # name the columns so ggplot can find them
ggplot(df_w, aes(x = t, y = w)) + geom_line()&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&#34;/2021/06/10/functional-pca-with-r/index_files/figure-html/unnamed-chunk-13-1.png&#34; width=&#34;576&#34; /&gt;&lt;/p&gt;
&lt;p&gt;In future posts, I hope to continue exploring the &lt;code&gt;fdapace&lt;/code&gt; package, including its ability to work with sparse data.&lt;/p&gt;
&lt;/div&gt;
&lt;div id=&#34;references&#34; class=&#34;section level3&#34;&gt;
&lt;h3&gt;References&lt;/h3&gt;
&lt;p&gt;I found the following references particularly helpful.&lt;/p&gt;
&lt;ol style=&#34;list-style-type: decimal&#34;&gt;
&lt;li&gt;Kokoszka, P. and Reimherr, M. (2017). &lt;a href=&#34;https://www.amazon.com/Introduction-Functional-Analysis-Chapman-Statistical-ebook/dp/B075Z9QCV9/ref=sr_1_1?dchild=1&amp;amp;keywords=Introduction+to+functional+data+analysis&amp;amp;qid=1623276309&amp;amp;sr=8-1&#34;&gt;&lt;em&gt;Introduction to Functional Data Analysis&lt;/em&gt;&lt;/a&gt;. CRC.&lt;/li&gt;
&lt;li&gt;Hsing, T. and Eubank, R. (2015). &lt;a href=&#34;https://www.amazon.com/Theoretical-Foundations-Functional-Introduction-Probability/dp/0470016914/ref=sr_1_1?dchild=1&amp;amp;keywords=theoretical+foundations+of+functional+data+analysis&amp;amp;qid=1623276176&amp;amp;sr=8-1&#34;&gt;&lt;em&gt;Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators&lt;/em&gt;&lt;/a&gt;. Wiley.&lt;/li&gt;
&lt;li&gt;Wang, J., Chiou, J. and Müller, H. (2015). &lt;a href=&#34;https://arxiv.org/pdf/1507.05135.pdf&#34;&gt;&lt;em&gt;Review of Functional Data Analysis&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Yao, F., Müller, H. and Wang, J. (2005). &lt;a href=&#34;https://anson.ucdavis.edu/~mueller/jasa03-190final.pdf&#34;&gt;&lt;em&gt;Functional Data Analysis for Sparse Longitudinal Data&lt;/em&gt;&lt;/a&gt;. JASA, Vol. 100, No. 470.&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;

        &lt;script&gt;window.location.href=&#39;https://rviews.rstudio.com/2021/06/10/functional-pca-with-r/&#39;;&lt;/script&gt;
      </description>
    </item>
    
  </channel>
</rss>
