Dynamical systems
1. Introduction to dynamical systems
Sections 1.1 and 1.2 were adapted from the following sources:
- Morrissey, David P., “Introduction to Dynamical Systems.” From Math Insight. http://mathinsight.org/intoduction_dynamical_system.
- Nykamp, Duane Q., “Initial dynamical systems exploration.” From Math Insight. http://mathinsight.org/dynamical_systems_exploration.
Alterations and additions were made under the Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License of the source material.
In Dynamical Systems, our main goal is to understand the behavior of states in a system, given a rule for how the state evolves. The states are our variables; in fact, we even call them state variables. Anything that one could represent with a number could be considered a state. Some examples of state variables include the population of a colony, the density of a chemical in a solution, the amount of money in a bank account, the position of a particle, a temperature—anything that can be represented by a number or set of numbers. Given the state variable, a dynamical system needs a rule which defines how the system changes (the “dynamics”). Determining the appropriate rule by which the state variable evolves or changes is often where the difficulty lies, and this is where modeling comes into play. In this course we will model changes in different state variables related to speech, including the frequency values of formants and the position of air particles in a sound wave. The rules for these behaviors can be approximated by mass-spring systems (simple harmonic oscillators) and their components, which are fundamental to much of modern science and engineering.
Formal study of dynamical systems involves studying mathematical models which have been handed down from creative experts in fields such as Physics, Chemistry, Biology, or Economics. Some systems may be derived from basic principles and tested to show experimental accuracy strong enough to create the technologies we use every day. Newton’s Laws, for example, are incredibly accurate under known situations and formulate a well-defined dynamical system. Problems from Classical Mechanics may be written as a dynamical system where the position and velocity of a particle (or even of a continuous medium) are the state variables and Newton’s Laws serve as the updating rule. These physical models constitute a side of dynamical systems which may be used as a quantitative tool to analyze the environment around us.
1.1. Example: constant rate of growth
The simplest type of dynamical system describes the evolution of a state variable which changes at a constant rate. As an example, consider a person’s age. Everyone ages at the same constant rate. Although our age is continuous, our birthday is a discrete event. On our birthday our age makes a jump from one integer value to the next. If we measured our age as a continuously changing number, the jump would not be noticeable, but it is a bit awkward to say things like “I am approximately 26.997 years old.” Instead we say “tomorrow I will be 27 years old.”
To express someone’s age as a discrete dynamical system, we need two things: a state variable, and a rule that describes how the state variable changes after each time interval. We may denote the person’s age in years between birthday \(n\) and \(n+1\) as \(a_{n}\). Hence, someone’s age between their 20th birthday and 21st birthday is \(a_{20}\). Thus \(a_{n}\) is the state variable: in this case, a representation of a person’s age.
Each birthday, a person gets 1 year older. Said another way, their next age is equal to their current age plus one. We can write the rule for the evolution of the state variable (age) as:
\[a_{n+1}=a_{n}+1\]
This might be said aloud a few different ways:
- “A of N plus one equals A of N, plus one.”
- “The next value of A equals the current value of A plus one.”
- “The next state is equal to one plus the current state.”
- “The next age equals the current age plus one.”
In a discrete dynamical system like this one, the rule which takes the previous value of the state variable and gives the next value is called an updating function. The right hand side of the above equation is the updating function for a person aging at a constant rate.
So far we have defined what it is that is changing (the state variable) and how it changes (the updating function). In order to complete the dynamical system we need an initial condition, which is the value the state variable begins at. The solution to a dynamical system depends on the system’s initial condition or starting point. In the case of modeling a person’s age we may simply choose:
\[a_{0}=0\]
In other words, an infant is zero until they reach their first birthday and turn 1 year old.
Now we have a complete dynamical system which may be written as:
\[ \begin{equation} \begin{cases} a_{0}=0\\ a_{n+1}=a_{n}+1 \end{cases} \end{equation} \]
To use this dynamical system, we start by using our updating function to discover how the initial condition changes after a single unit of time; in this example, that means discovering how a 0-year-old’s age changes after one birthday. Since the initial age is represented by \(a_{0}\), the next age is represented by \(a_{0+1}\), written more simply as \(a_{1}\). To calculate \(a_{1}\), we insert our initial condition into the update function and solve it using arithmetic (or “plug and chug” as some math teachers like to say).
\[ \begin{align} a_{n+1}&=a_{n}+1\\ a_{1}&=a_{0}+1\\ a_{1}&=0+1\\ a_{1}&=1 \end{align} \]
Subsequent steps work the same way (\(a_{2}=a_{1}+1=1+1=2\), \(a_{3}=a_{2}+1=2+1=3\), etc.).
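If you would rather not do the arithmetic by hand, the iteration is easy to automate. Here is a minimal sketch in Python (the names `next_age` and `ages` are our own, not from the text):

```python
def next_age(a_n):
    """Updating function: the next age is the current age plus one."""
    return a_n + 1

a = 0          # initial condition: a_0 = 0
ages = [a]     # the solution list, starting with the initial condition
for n in range(3):
    a = next_age(a)
    ages.append(a)

print(ages)    # first few entries of the solution: [0, 1, 2, 3]
```

Each pass through the loop applies the updating function once, exactly as we did above by hand.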
Next, we would like to write down a solution to the system. We can think of the solution to a discrete dynamical system as a list; in our aging example, the list-solution would be \([0, 1, 2, 3, 4, 5, 6, …]\). However, we can never write down the entire list because it never ends (at least, not in this simple example with immortal people). In this case we quickly computed the first few entries of the list \([0, 1, 2, 3]\). But, the actual solution consists of every non-negative integer from 0 through infinity \([0, 1, 2, 3, 4, 5, …]\), so writing out the whole solution is impossible.
Instead, we can write down the entire solution as a formula for the \(n\)th entry of the list. As you may have already figured out, we can deduce that \(a_{n}=n\). (In a pure math class one would prove this by induction, but for the purposes of this course it suffices to find a pattern.) Of course it is no surprise that a person is \(n\) years old for the year following their \(n\)th birthday. But this simple example lets us illustrate the concept in the framework of a dynamical system.
Similarly, it can be helpful to describe the solution of a dynamical system in terms of its goal: a value or set of values that the system appears to “aim” for. In the aging example, the goal of the system is infinity (because \(n\) grows to infinity, and \(a_{n}=n\)). In this course, we will find that most of our systems have a single finite goal (e.g., the trajectory of a formant transition into a vowel) or oscillate between two finite goals (e.g., a simple sine wave).
1.2. Example: the decay of lead in the bloodstream
As lead is a strong poison, small amounts of lead in the body can cause lead poisoning, leading to serious health problems. The body does eliminate lead, such as through urine, but the elimination is slow. Elimination from the bones can take decades, but we’ll model how the lead level in the bloodstream decays, which is a faster process.
We can use a simple dynamical system model where the state variable \(p\) is the concentration of lead in the blood, measured in μg/dL (micrograms per deciliter). If no further sources of lead are introduced, then lead will be slowly eliminated from the body. Let’s imagine that the amount of lead decreases by 11% each week.
Although the lead level changes continuously over time, let’s just model weekly snapshots of the lead concentration. (You could imagine that a doctor is testing a patient’s lead concentrations each week, and we’ll work with just these weekly numbers.) We’ll use a discrete-time dynamical system (like the birthday example above), where the time interval is one week.
Let \(t\) measure time in weeks, and we’ll allow \(t\) to take only non-negative integer values. We’ll start at \(t=0\). The value \(t=1\) will represent one week later, \(t=2\) will represent two weeks later, etc. We’ll use \(p_{t}\) to denote the lead level at time \(t\). We’ll denote the lead level at time \(t=0\) (the initial condition) by \(p_{0}\), the lead level after one week by \(p_{1}\), the lead level after two weeks by \(p_{2}\), etc.
Since the lead level decreases by 11% each week, we can write a simple formula for the dynamical rule: the lead level after a week is 11% less than the previous lead level. Therefore, the rule for the first week is \(p_{1}=p_{0}-0.11p_{0}\), the rule for the second week is \(p_{2}=p_{1}-0.11p_{1}\), the rule for the third week is \(p_{3}=p_{2}-0.11p_{2}\), etc. We can summarize all these steps with a single updating function: if the lead level in week \(t\) is \(p_{t}\), then the lead level the next week (that is, \(p_{t+1}\) in week \(t+1\)) is equal to:
\[p_{t+1}=p_{t}-0.11p_{t}\]
(You may recall from your previous education in algebra that this updating function can be simplified to \(p_{t+1} = 0.89p_{t}\). While the updating function could be written either way, we will use the unsimplified version for two reasons. First, it more explicitly captures the notion that we are updating the previous value by applying a change to it—specifically, we update \(p_{t}\) by subtracting \(0.11p_{t}\) from it. Second, the unsimplified form is the template for the types of systems that we’ll use in future lessons.)
Let’s imagine that we started with a high level of lead in the blood, say 64 μg/dL. Thus our initial condition, \(p_{0}\), is \(64\):
\[p_{0}=64\]
We can use our dynamical system to calculate how much lead will be in the bloodstream when measured a week later (\(p_{1}\)). All we have to do is insert our initial condition into the updating function and do a bit of arithmetic:
\[ \begin{align} p_{t+1}&=p_{t}-0.11p_{t}\\ p_{1}&=p_{0}-0.11p_{0}\\ p_{1}&=64-0.11(64)\\ p_{1}&=56.96 \end{align} \]
To find the amount of lead in the bloodstream after two weeks of decay (\(p_{2}\)), we use the updating function once again, substituting \(p_{1}\) for \(p_{t}\) and \(p_{2}\) for \(p_{t+1}\).
\[ \begin{align} p_{t+1}&=p_{t}-0.11p_{t}\\ p_{2}&=p_{1}-0.11p_{1}\\ p_{2}&=56.96-0.11(56.96)\\ p_{2}&=50.69 \end{align} \]
We could do the same for all the following weeks. (You should take the time to calculate the next few iterations, or time steps, on your own for practice.)
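Those iterations can also be automated. Here is a minimal sketch in Python (the names `update` and `levels` are our own, not from the text):

```python
def update(p_t):
    """Lead decay updating function: p_{t+1} = p_t - 0.11 * p_t."""
    return p_t - 0.11 * p_t

p = 64.0            # initial condition: p_0 = 64 μg/dL
levels = [p]
for t in range(4):  # compute p_1 through p_4
    p = update(p)
    levels.append(p)

# Rounded to two decimals: [64.0, 56.96, 50.69, 45.12, 40.16]
print([round(x, 2) for x in levels])
```

Note that the first two computed values match the hand calculations above (56.96 and 50.69 μg/dL).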
1.3. Graphing a dynamical system
The graph below (Figure 1) demonstrates how we can track the change in lead concentration from week to week. The y-axis represents how much lead is in the bloodstream (the state variable \(p\)). In our example, the lead concentration starts at 64 μg/dL, so our first measurement should go at a y-value of 64 μg/dL. The x-axis represents how many weeks have passed since the initial measurement. The first week on the x-axis is "week 0" instead of "week 1" because we make our first measurement of lead concentration before any time has passed. (It's the same as how an infant is 0 years old until they've been alive for 12 months.) So, we plot the initial condition as the point where the y-value is 64 μg/dL and the x-value is 0 weeks.
We've already calculated the next two values of lead concentration: the lead concentration is 56.96 μg/dL after 1 week of decay and 50.69 μg/dL after two weeks of decay. We can plot the week 1 measurement by placing a point where the x-value is 1 and the y-value is 56.96 μg/dL, then plot the week 2 measurement by placing a point where the x-value is 2 and the y-value is 50.69 μg/dL.
These three measurements—for week 0, week 1, and week 2—have already been graphed in Figure 1. You can see each of the next weeks' measurements using the Forward button beneath the graph, or backtrack using the Back button. To see the whole year of measurements one after another, from \(t=0\) to \(t=52\), use the Run Simulation button. After a year of measurements (52 weeks) with 11% decay per week, the amount of lead in the bloodstream has decreased from 64 μg/dL to about 0.1494 μg/dL.
In this dynamical system, the state variable (\(p\)) quickly decreases after the first few updates, but decreases more slowly as time goes on. This is because the change in our state variable is always 11% of its previous value. At the early stages, when the state variable is large, 11% of a large number is still fairly large, so the change is fairly large; but as the state variable becomes smaller, 11% of the state variable also becomes smaller, and so the change is smaller. (This is different from the aging example above, in which the state variable changed by the same constant value on every update.)
If we kept calculating the concentration of lead in the bloodstream every week, we would find that the amount of lead continues to decrease, but it never completely disappears—even if we calculated it for thousands of years. Each subsequent calculation brings the system closer to 0 μg/dL, but the system will never truly reach 0 μg/dL because 11% of a non-zero value is still non-zero. Even so, it is clear enough that this system seems to creep as close as it can toward 0 μg/dL. Therefore, we say that the goal of this system is 0 μg/dL.
(In reality, there is probably a moment when the last molecule of lead disappears and the lead concentration is truly 0 μg/dL. Our simplistic imaginary model doesn’t take into account that lead molecules are not infinitely divisible.)
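Both claims, the 52-week value of about 0.1494 μg/dL and the fact that the level stays strictly positive, are quick to check numerically. A short sketch (assuming Python; variable names are our own):

```python
p = 64.0                 # initial condition: 64 μg/dL
for week in range(52):   # one year of weekly updates
    p = p - 0.11 * p     # 11% decay per week

print(round(p, 4))       # about 0.1494 μg/dL after 52 weeks
assert p > 0             # the state never actually reaches the goal of 0
```

The final `assert` passes because multiplying a positive number by 0.89 always gives another positive number.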
1.4. Example: Lip aperture in speech
Now that you can calculate and graph a dynamical system for a few iterations, let's take a break from the math to see how dynamical systems apply to speech.
Anything that changes over time can be described as a dynamical system. In speech, practically everything changes over time, including the frequency of vibration of the air particles through which sound travels and the positions of speech articulators like the lips and tongue. We'll focus on the example of the positions of the lips now, but spend some time later thinking about how you might describe different properties of speech as dynamical systems.
In English speech, we use the lips to make sounds like /b/ as in "a bun". Find a mirror and watch your lips as you say "a bun", or ask a friend to say it for you a few times. You should see that the upper and lower lip are slightly apart during the word "a", then come together to make a closure during the /b/. Between when the lips are fully open and fully closed, they're constantly moving closer together or farther apart. A dynamical system for the lips should capture the behavior of the lips as they move.
Let's think about what we would need to have a dynamical system that models one part "a bun": how the lips move from being apart during the "a" of "a bun" to being closed during the /b/. We're not going to calculate anything right now, but we can at least think about how to determine the state variable, initial condition, and updating function.
First, the state variable. There are two lips, but we care less about the exact position of each lip and more about how far apart the lips are at any point in time. Our state variable should therefore be the distance between the lips, which we'll call "lip aperture". Using millimeters (mm) to measure distance, a lip aperture of 120 mm would mean that the upper and lower lips are 120 mm apart. When lip aperture is 0 mm, the lips must be closed because there's no space between them.
Now that you know what state variable to measure, you can estimate an initial condition for it by using a ruler to measure the initial lip aperture in the phrase "a bun". Start saying the "a", then freeze (stop all facial movement) and use the ruler to measure the distance from the top of your lower lip to the bottom of your upper lip.
As for the equation of the updating function, that's what we call "an empirical question": you don't know the equation right now, but you could find it out with clever application of the imaging technologies linguists use, like video cameras, real-time MRI, x-ray microbeam, or electromagnetic articulography (EMA). To keep things simple, borrow the lead decay example's updating function, since it seemed to get close to 0 (which is what we want for modeling a lip closure).
With those components—the lip aperture state variable, the initial condition you measured with a ruler, and an updating function—you can assemble a dynamical system for closing the lips to make a /b/ and calculate how long it takes for lip aperture to decrease from the initial condition to approximately 0 mm (say, when lip aperture gets below 0.1 mm). And that's one great use of a dynamical system!
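As a sketch of that calculation, suppose the ruler measurement gave an initial aperture of 10 mm (a made-up number; yours will differ) and we borrow the 11%-per-step decay rule from the lead example. In Python, counting the steps until the aperture drops below 0.1 mm might look like this:

```python
aperture = 10.0    # hypothetical initial condition in mm (measure your own with a ruler)
threshold = 0.1    # treat apertures below 0.1 mm as "closed"
steps = 0
while aperture >= threshold:
    aperture = aperture - 0.11 * aperture  # borrowed lead-decay updating function
    steps += 1

print(steps)       # number of updates needed to "close" the lips
```

A different initial aperture or a different decay rate would change the step count, which previews the questions in the list below.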
Of course, that's just the tip of the proverbial iceberg. At the very least, you might want to model other aspects of lip aperture during the phrase "a bun", like:
- You don't always start with your lips the same distance apart when you say "a": sometimes they start closer together, and sometimes they start farther apart. How do we make sure that our dynamical system always closes the lips for /b/ no matter what the initial lip aperture is?
- What if you're talking faster or slower? How do we change the dynamical system to model lip aperture changing more quickly or more slowly?
- At some point, the lips have to be released to say the vowel in "bun". What kind of dynamical system can we use to control how far apart the lips get?
The rest of this reading addresses those issues. In section 2, you'll learn about how a dynamical system reaches its goal no matter what its initial condition is. Section 3 explains how you can change a dynamical system to make it go faster or slower (section 3.1) and how to change the system's goal to something other than 0 (section 3.2). As you continue this reading, think about how each section could help solve one of the problems listed above.
2. Goals of a dynamical system
At the end of section 1.3, we said that the goal of our lead decay system was 0 μg/dL. To see what this means, consider how the same system behaves under different initial conditions (Figure 2). Over time, this updating function converges toward zero regardless of whether it starts at 16, 64, 128, or 256. The initial condition doesn't affect where the system ends up.
In this example, we arbitrarily chose to calculate the dynamical system out to 52 weeks, at which point we felt comfortable concluding that our calculations were creeping toward a goal of 0 μg/dL. But we will be using many different dynamical systems in this class, and it would be annoying—not to mention time-consuming—to have to calculate 50 or more iterations for each system just to find its goal. While it’s important to be able to calculate successive values of a dynamical system from some initial condition, it would be convenient if we could find the goal of a system more easily. It turns out that we can determine the goal of a dynamical system just by doing a little algebra on the updating function.
Even though this dynamical system never actually reaches its goal, we know that if it ever did reach its goal, it would stop changing. That is, assuming the goal is 0 μg/dL, if we tried to reduce 0 μg/dL by 11%, we would still end up with 0 μg/dL. No change. We can say that a system has reached its goal when there is no change between its current state (the state at time \(t\)) and its next state (the state at time \(t+1\)). When a system has reached its goal, the current state equals the next state, and both are equal to the goal.
We can use that premise to calculate the goal for our lead decay system. When the current state equals the next state, that means \(p_{t}=p_{t+1}\). Since \(p_{t}\) and \(p_{t+1}\) are the same, we can get rid of the indices for current time and next time and just call the state \(p\). That turns our original equation
\[p_{t+1}=p_{t}-0.11p_{t}\]
into
\[p=p-0.11p\]
From this point, we do a bit of algebra to find out what the state of the system (\(p\)) is when it stops changing. (To do this, we simplify the right side, then get all the \(p\)s on the left side, then divide both sides by the coefficient of \(p\)).
\[ \begin{align} p&=p-0.11p\\ p&=0.89p\\ 0.11p&=0\\ p&=0 \end{align} \]
Since the unchanging state variable \(p\) is at its goal, by our calculations above, the goal of the system is zero.
Note that in calculating the goal of this system, we didn’t reference the initial condition at all. The goal of a dynamical system and its initial condition are unrelated. A dynamical system with a goal will aim toward that goal no matter what its initial conditions are.
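This independence from the initial condition is easy to verify numerically. The sketch below (Python; variable names are our own) iterates the lead decay system from the four starting values shown in Figure 2 and checks that every trajectory ends up vanishingly close to 0:

```python
def update(p):
    """Lead decay updating function: p_{t+1} = p_t - 0.11 * p_t."""
    return p - 0.11 * p

for p0 in [16.0, 64.0, 128.0, 256.0]:
    p = p0
    for _ in range(300):   # iterate far longer than 52 weeks
        p = update(p)
    assert p < 1e-10       # each trajectory has converged toward the goal of 0
```

No matter which starting value we pick, the loop drives the state toward the same goal.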
Think back to the lip aperture example where we used the lead decay updating function to approximate lip closure. If the action of closing the lips to make a /b/ has a lip aperture goal of 0 mm, is there any initial condition for lip aperture that wouldn't eventually converge to 0 mm?
3. Making changes to dynamical systems
3.1. Controlling how fast the system reaches its goal
In the example where we calculated the elimination of lead from the bloodstream, we decreased the lead concentration by 11% each week. But if there were a medicine that accelerated the process of lead elimination—maybe doubling the rate to 22%—we would need an updating function with a faster rate of decay. Changing 11% to 22% in our system would turn the updating function into
\[p_{t+1}=p_{t}-0.22p_{t}\]
Figure 3 demonstrates how changing the rate of decay affects the dynamical system. Both the original system (11% decay rate) and the new system (22% decay rate) still have a goal of zero, but the system with the larger decay rate changes faster. It’s easiest to see the difference in the first few weeks. For example, on week 2, the concentration of lead in the bloodstream is above 50 μg/dL with an 11% rate of decay, but below 40 μg/dL with a 22% rate of decay. Altering the rate of decay changes how quickly the system reaches its goal.
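The week-2 comparison is quick to reproduce (a Python sketch; the variable names are our own):

```python
p_slow = 64.0   # 11% decay per week (original system)
p_fast = 64.0   # 22% decay per week (hypothetical medicine)
for week in range(2):
    p_slow = p_slow - 0.11 * p_slow
    p_fast = p_fast - 0.22 * p_fast

# Week 2: above 50 μg/dL at the slow rate, below 40 μg/dL at the fast rate
print(round(p_slow, 2), round(p_fast, 2))
```

Running both systems side by side makes it easy to see how much faster the larger decay rate approaches the goal.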
Before moving on, consider how this affects the lip aperture dynamical system which used a similar updating function. How could you change the updating function to model a faster speech rate, which would have faster lip closures? How about a slower speech rate?
3.2. Changing the goal
In the dynamical system that models concentration of lead in the bloodstream, the aim of the system is the elimination of lead, represented as 0 μg/dL. But in many dynamical systems, the goal is something other than zero. In those cases, we can control the goal by adding a term to our updating function.
Let’s change our system a little and imagine we wanted to reach a goal of 50. We know from section 2 that if our system ever reached its goal, the state variable at that time would equal the goal value (in this case, \(p_{t}=50\)). And, once at its goal, a system does not change anymore, so the state variable at the next point will also equal 50 (\(p_{t+1}=50\)). If we substitute the goal 50 for \(p_{t}\) and \(p_{t+1}\) in our updating equation, we can calculate what needs to be added to our updating function. We’ll represent the new term as \(C\). That gives us:
\[ \begin{align} p_{t+1}&=p_{t}=goal=50\\\\ goal&=goal-0.11(goal)+C\\ goal-C&=goal-0.11(goal)\\ -C&=goal-0.11(goal)-goal\\ C&=-(50-0.11(50)-50)\\ C&=5.5 \end{align} \]
Therefore, to reach a goal of 50, the new updating function is:
\[p_{t+1}=p_{t}-0.11p_{t}+5.5\]
You can confirm that this equation yields a goal of 50 by calculating the goal like we did in section 2 (making sure to include the new term \(+5.5\)).
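A quick numerical confirmation is also possible (a Python sketch; the names are our own): 50 should be a fixed point of the new updating function, and iterating from an arbitrary starting value should approach it.

```python
def update(p):
    """Updating function with the added term: p_{t+1} = p_t - 0.11 * p_t + 5.5."""
    return p - 0.11 * p + 5.5

# At the goal, the system stops changing (tolerance allows for floating-point rounding)
assert abs(update(50.0) - 50.0) < 1e-9

p = 64.0                  # an arbitrary initial condition
for _ in range(300):
    p = update(p)
print(round(p, 6))        # approaches the goal of 50
```

The first `assert` restates the premise from section 2: plugging the goal into the updating function returns the goal unchanged.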
When we plot this dynamical system now, no matter what the initial condition is, the calculated values get closer to 50 as time goes on (Figure 4). Once again, the initial condition has nothing to do with the goal of the system. The trajectories with initial conditions below 50 rise up toward 50, and the trajectories with initial conditions above 50 gradually lower to get closer to 50.
Remember that in the lip aperture example from before, we wanted a way to open the lips after they closed. How could adding this \(C\) term help us accomplish that?
4. General dynamical systems formula
Goal-driven dynamical systems usually have a similar form to the lead decay example. We can state the form more generally like this:
\[x_{t+1}=x_{t}-kx_{t}+C\]
In this formula, \(x_{t}\) and \(x_{t+1}\) are the current and next states of a state variable \(x\), \(k\) represents how fast the system reaches its goal, and \(C\) is an additional term that helps to determine the goal of the system. The value for \(k\) is usually between 0 and 1, with higher values meaning bigger changes. The value for \(C\) can be calculated once both \(k\) and the desired goal of the system are known.
When working backwards to find the goal of a dynamical system from its updating function, the algebra works generally as follows:
\[ \begin{align} x_{t+1}&=x_{t}=goal\\\\ goal&=goal-k(goal)+C\\ goal-goal+k(goal)&=C\\ k(goal)&=C\\ goal&=\frac{C}{k} \end{align} \]
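The general formula can be packed into one small helper. The sketch below (Python; the function name `simulate` is our own) iterates \(x_{t+1}=x_{t}-kx_{t}+C\) and checks the prediction \(goal=C/k\) using the lead example with a goal of 50:

```python
def simulate(x0, k, C, steps):
    """Iterate x_{t+1} = x_t - k * x_t + C from the initial condition x0."""
    x = x0
    for _ in range(steps):
        x = x - k * x + C
    return x

k, C = 0.11, 5.5
predicted_goal = C / k                  # C/k = 5.5/0.11 = 50
x_final = simulate(200.0, k, C, 500)    # any initial condition approaches the goal
print(round(predicted_goal, 6), round(x_final, 6))
```

Setting \(C=0\) recovers the original lead decay system (goal 0), and setting \(k=0\) with a nonzero \(C\) recovers a constant-change system like the aging example.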
This type of dynamical system is simple, but it can be very useful. And, when you couple it with a constant change system like the aging one back in section 1.1, you can model simple harmonic oscillators. Simple harmonic oscillators lie at the core of practically everything in classical physics, which makes them a fundamental part of everyday life. Linguists use simple harmonic oscillators to understand phenomena like sound wave propagation, vocal tract resonance, and vocal fold vibration. Variations on simple harmonic oscillators can also be used as more accurate updating functions for lip aperture and other speech articulator movement.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
5. Exercises
These exercises test your knowledge of how to use dynamical systems. If you understand the parts of the model (state variable, initial condition, updating function) and model goals, and if you can successfully calculate different iterations of a model, then you’re doing great!
Exercise 1
What is \(x_{3}\)?
\[ \begin{equation} \begin{cases} x_{0}=0\\ x_{t+1}=x_{t}+10 \end{cases} \end{equation} \]
Answer
\[ x_{3}=30 \]
Exercise 2
What is \(p_{3}\)?
\[ \begin{equation} \begin{cases} p_{0}=50\\ p_{t+1}=p_{t}+5 \end{cases} \end{equation} \]
Answer
\[ p_{3}=65 \]
Exercise 3
What is \(F_{4}\)? What is the goal of this system?
\[ \begin{equation} \begin{cases} F_{0}=800\\ F_{t+1}=F_{t}-0.5F_{t} \end{cases} \end{equation} \]
Answer
\[ \begin{align} F_{4}&=50\\ F_{goal}&=0 \end{align} \]
Exercise 4
What is \(F_{4}\)? What is the goal of this system?
\[ \begin{cases} F_{0}=800\\ F_{t+1}=F_{t}-0.5F_{t}+1100 \end{cases} \]
Answer
\[ \begin{align} F_{4}&=2112.5\\ F_{goal}&=2200 \end{align} \]
Exercise 5
Given the following dynamical system, find the value of \(C\) that would make the goal of this system 800.
\[ \begin{cases} F_{0}=2200\\ F_{t+1}=F_{t}-0.5F_{t}+C \end{cases} \]
Answer
\[ \begin{align} F_{t+1}&=F_{t}=goal=800\\\\ F_{t+1}&=F_{t}-0.5F_{t}+C\\ goal&=goal-0.5(goal)+C\\ goal-C&=goal-0.5(goal)\\ -C&=goal-0.5(goal)-goal\\ C&=-(800-0.5(800)-800)\\ C&=400 \end{align} \]