Biochemical kinetics concerns the concentrations of chemical substances in biological systems. In this section we neglect spatial variation in concentrations, effectively assuming that chemical reactions occur in environments that are “well-mixed”.
5.1 The law of mass action
We denote the \(i^{th}\) chemical species using the notation \(C_i\). The concentration of molecule type \(C_i\) is denoted \([C_i]\) and represents the number of molecules of \(C_i\) per unit volume.
Consider a generic reversible reaction in which species \(C_1\) and \(C_2\) combine to form \(C_3\), \[
C_1+C_2 \xrightleftharpoons[k_b]{k_f} C_3,
\] where \(k_f\) and \(k_b\) are dimensional rate constants.
Example
Suppose A and B react to produce C. Hence \[
A+B\xrightarrow{k} C.
\]
The law of mass action states that the rate of the reaction is
\[
k[A][B].
\]
Using the reaction rates, we write down ordinary differential equations that describe how the concentration of each molecular species changes in time. For the product C, for example,
\[
\frac{d[C]}{dt}=k[A][B].
\]
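The reactant concentrations are depleted at the same rate at which the product is formed, so the full system of ODEs for this reaction reads
\[
\begin{aligned}
\frac{d[A]}{dt}&=-k[A][B], \\
\frac{d[B]}{dt}&=-k[A][B], \\
\frac{d[C]}{dt}&=k[A][B].
\end{aligned}
\]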
Example
Consider the reversible reaction \[
A+B \xrightleftharpoons[k_{-}]{k_{+}} C.
\] Define dependent variables, identify reaction rates and derive ordinary differential equations that describe how the concentrations evolve in time.
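By the law of mass action, the forward and backward reactions proceed at rates \(k_+[A][B]\) and \(k_-[C]\) respectively, so the governing ODEs are
\[
\begin{aligned}
\frac{d[A]}{dt}&=-k_+[A][B]+k_-[C], \\
\frac{d[B]}{dt}&=-k_+[A][B]+k_-[C], \\
\frac{d[C]}{dt}&=k_+[A][B]-k_-[C].
\end{aligned}
\]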
For a given set of initial conditions, \[
[A](t=0)=[A]_0, \ \ \ [B](t=0)=[B]_0, \ \ \ [C](t=0)=[C]_0,
\] the ODEs can be solved and hence the concentrations of the different molecules described as time evolves.
5.1.1 Example
The previous examples have had stoichiometric coefficients all set to unity (i.e. all reactions involved one molecule of one species interacting with one molecule of another). Consider a reaction in which one molecule of species A reacts with \(m\) molecules of species B, giving rise to \(n\) molecules of species B and \(p\) molecules of species C, i.e. \[
A+mB \xrightarrow{k_1} nB + pC.
\]
The law of mass action says that the rate of the forward reaction is \[
k_1[A][B]^m.
\]
The governing ODEs are \[
\begin{aligned}
\frac{d[A]}{dt}&=-k_1[A][B]^m, \nonumber\\
\frac{d[B]}{dt}&=(n-m)k_1[A][B]^m, \nonumber\\
\frac{d[C]}{dt}&=pk_1[A][B]^m. \nonumber
\end{aligned}
\]
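These ODEs are readily integrated numerically. The sketch below does so with scipy's odeint; the stoichiometric values \(m=2\), \(n=3\), \(p=1\), the rate constant \(k_1=1\) and the initial concentrations are illustrative assumptions rather than values taken from the text.
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Illustrative parameter values (assumptions chosen for demonstration only)
k1, m, n, p = 1.0, 2, 3, 1

# RHS of the ODEs for A + mB -> nB + pC
def rhs_stoich_model(z, t):
    A, B, C = z
    rate = k1*A*B**m
    return [-rate, (n - m)*rate, p*rate]

# Time domain and initial concentrations
t = np.linspace(0, 10, 500)
init_cond = [1.0, 1.0, 0.0]

# Integrate the ODEs and plot the three concentrations
sol = odeint(rhs_stoich_model, init_cond, t)
fig, ax = plt.subplots()
ax.plot(t, sol[:, 0], t, sol[:, 1], t, sol[:, 2])
ax.legend(['$[A]$', '$[B]$', '$[C]$'], loc='best')
ax.set_xlabel('$t$')
plt.show()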
Example
Consider the reversible reaction \[
A+A \xrightleftharpoons[k_{-}]{k_{+}} B.
\] Define dependent variables, identify reaction rates and derive ordinary differential equations that describe how the concentrations evolve in time.
The reaction rates are \[
k_+[A]^2
\] and \[
k_- [B].
\] The ODEs are \[
\begin{aligned}
\frac{d[A]}{dt}&=-2k_+[A]^2+2k_- [B] , \nonumber\\
\frac{d[B]}{dt}&=k_+[A]^2-k_- [B],
\end{aligned}
\]
5.2 Enzyme kinetics
Biochemical reactions are often regulated by enzymes (catalysts that convert a substrate into a product without being consumed themselves). Consider a chemical reaction in which substrate, \(S\), reacts with an enzyme, \(E\), to form a complex, \(C\). Suppose the complex can either undergo the reverse reaction or go on to form a product, \(P\), with the release of the enzyme, i.e. \[
S+E \xrightleftharpoons[k_{-1}]{k_{1}} C \xrightarrow{k_2} P+E .
\]
5.2.1 Deriving the model equations
For notational convenience we let lower-case letters denote concentrations, i.e. \(s=[S]\), \(e=[E]\), \(c=[C]\) and \(p=[P]\).
Applying the law of mass action, the above enzyme kinetic scheme can be described by the system of ODEs
\[
\begin{aligned}
\frac{d s}{dt} &= -k_1 se + k_{-1}c, \nonumber \\
\frac{d e}{dt} &= -k_1 se + k_{-1}c +k_2 c, \nonumber\\
\frac{d c}{dt} &= k_1 se - k_{-1}c-k_2c, \nonumber \\
\frac{d p}{dt} &= k_2 c.
\end{aligned}
\tag{5.1}\]
We consider initial conditions such that at \(t=0\) the reaction has not yet started, i.e. there is no product or complex formed \[
s(0)=s_0, \ \ e(0)=e_0, \ \ c(0)=0, \ \ p(0)=0,
\] where \(s_0\) and \(e_0\) represent the initial concentrations of substrate and enzyme, respectively.
5.2.2 Numerical solutions
In Figure 5.1 we plot numerical solutions of Equation 5.1. For this solution we have chosen initial conditions such that \[
\frac{e_0}{s_0}\ll 1.
Note that \(c\) and \(e\) evolve rapidly in time close to the initial data, whilst \(s\) and \(p\) do not.
Figure 5.1: Numerical solutions of the Michaelis-Menten model
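Solutions of this type can be generated with a short script. The following is a minimal sketch that integrates Equation 5.1 with odeint; the parameter values \(k_1=k_{-1}=k_2=1\), \(s_0=1\) and \(e_0=0.1\) are illustrative assumptions and are not necessarily those used to produce Figure 5.1.
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Illustrative rate constants and initial data (assumed values)
k1, km1, k2 = 1.0, 1.0, 1.0
s0, e0 = 1.0, 0.1

# RHS of the enzyme kinetics model (Equation 5.1) for z = [s, e, c, p]
def rhs_mm_model(z, t):
    s, e, c, p = z
    ds_dt = -k1*s*e + km1*c
    de_dt = -k1*s*e + km1*c + k2*c
    dc_dt = k1*s*e - km1*c - k2*c
    dp_dt = k2*c
    return [ds_dt, de_dt, dc_dt, dp_dt]

# Integrate from initial conditions with no complex or product present
t = np.linspace(0, 25, 1000)
sol = odeint(rhs_mm_model, [s0, e0, 0.0, 0.0], t)

# Plot all four concentrations
fig, ax = plt.subplots()
ax.plot(t, sol)
ax.legend(['$s$', '$e$', '$c$', '$p$'], loc='best')
ax.set_xlabel('$t$')
plt.show()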
5.2.3 Reducing the dimensions of the model
Whilst there are four dependent variables in the problem (\(s(t)\), \(c(t)\), \(e(t)\) and \(p(t)\)), the description of the problem can be simplified by noting that the variable \(p(t)\) does not couple back to the other variables, i.e. if we can solve the system for \(s(t)\), \(e(t)\) and \(c(t)\) then \(c(t)\) can be expressed as a function of time and
\[
p(t)=k_2\int c(t) dt + C_1.
\]
Furthermore, there is a conserved quantity in the system. Note that
\[
\frac{d e}{dt}+\frac{d c}{dt}=0,
\]
which reflects the fact that enzyme exists either in free form or bound in the complex. Integrating and applying the initial conditions yields \[
e(t)+c(t)=e_0.
\]
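This conservation law can be used to eliminate \(e(t)=e_0-c(t)\), leaving a pair of ODEs for \(s\) and \(c\):
\[
\begin{aligned}
\frac{ds}{dt}&=-k_1 s(e_0-c)+k_{-1}c, \\
\frac{dc}{dt}&=k_1 s(e_0-c)-(k_{-1}+k_2)c.
\end{aligned}
\]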
We consider the (often realised) case where the amount of enzyme in the system is small compared with the amount of substrate, i.e. \[
\epsilon = \frac{e_0}{s_0}\ll 1.
\] The presence of the small parameter \(\epsilon\) allows us to use perturbation theory to calculate approximate solutions to Equation 5.2. However, the problem is singular owing to the \(\epsilon dv/dt\) term.
When posing a solution as an expansion, a necessary question to ask is: when is the expansion valid? On the outer scale we have shown that \[
v_0=\frac{u_0}{u_0+K}.
\]
Note that the expression
\[
v_0=\frac{u_0}{u_0+K}
\]
does not satisfy the initial condition \(v(0)=0\). As \(u_0(0)=1\), we find that at \(\tau=0\)
\[
v_0=\frac{1}{1+K} \neq 0.
\]
Hence the proposed expansion is not valid, at least near \(\tau=0\).
5.2.5.3 Asymptotic expansions: the inner solution
This issue can be rectified by proposing a different scaling for time and recalculating a series solution in the new coordinate system.
We proceed by making the change of variable \[
\sigma=\frac{\tau}{\epsilon}.
\]
Note that \(\sigma=1\) corresponds to \(\tau=\epsilon\). Hence the proposed rescaling of time will give rise to what is called the inner solution to the problem (close to \(t=0\)).
To distinguish inner and outer solutions we relabel the dependent variables in the inner region (writing, for example, \(U(\sigma)\) and \(V(\sigma)\) for the inner counterparts of \(u\) and \(v\)) and expand them in powers of \(\epsilon\) as before. Requiring the inner and outer solutions to match as \(\tau \rightarrow 0\) (equivalently, as \(\sigma \rightarrow \infty\)) implies \(A=1\).
5.3 The Brusselator
The Brusselator is an abstract model that can be used to demonstrate oscillations in (bio)chemical systems. Consider a chemical reaction in which six chemical species, A, B, D, E, X and Y, react according to the scheme \[
\begin{aligned}
A&\xrightarrow{k_{1}} X, \nonumber \\
B+X&\xrightarrow{k_{2}} Y+D, \nonumber\\
2X+Y&\xrightarrow{k_{3}} 3X, \nonumber\\
X&\xrightarrow{k_{4}} E.
\end{aligned}
\]
Assuming that the concentrations of A and B (\([A]\) and \([B]\), respectively) are in vast excess (i.e. the amount by which A and B are depleted by the reactions is negligible compared with the total amounts of A and B present), their concentrations are treated as constants. Furthermore, as D and E are products but not reactants (they appear only on the right-hand sides of reactions), we do not concern ourselves with their dynamics.
5.3.1 Deriving model equations
To make the steps leading to ODEs obvious, it is useful to rewrite the reaction scheme in the form \[
\begin{aligned}
A&\xrightarrow{k_{1}} X, \nonumber \\
B+X&\xrightarrow{k_{2}} Y+D, \nonumber\\
X+X+Y&\xrightarrow{k_{3}} X+X+X, \nonumber\\
X&\xrightarrow{k_{4}} E.
\end{aligned}
\]
Applying the law of mass action yields that the four reactions occur at rates:
\(k_1[A]\),
\(k_2[B][X]\),
\(k_3[X]^2[Y]\),
\(k_4[X]\).
The total time derivative of \([X]\) is obtained by visiting each X in the rewritten reaction scheme once and adding the corresponding reaction rate to the right-hand side of the ODE, with a positive sign where X appears as a product and a negative sign where it appears as a reactant, i.e. \[
\frac{d[X]}{dt}=k_1[A]-k_2[B][X]+k_3[X]^2[Y]-k_4[X].
\] Similarly, \[
\frac{d[Y]}{dt}=k_2[B][X]-k_3[X]^2[Y].
\]
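The analysis that follows, and the code used to produce Figure 5.2, work with a nondimensionalised form of these equations (Equation 5.4), in which \(x\) and \(y\) denote suitably scaled concentrations of X and Y and \(a\) and \(b\) are positive parameters proportional to \([A]\) and \([B]\), respectively:
\[
\begin{aligned}
\frac{dx}{dt}&=f(x,y)=a-(b+1)x+x^2y, \\
\frac{dy}{dt}&=g(x,y)=bx-x^2y.
\end{aligned}
\]
This system has a unique positive steady state at \((x,y)=(a,b/a)\), and the determinant of the Jacobian evaluated there is \(a^2>0\).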
Hence \((a,b/a)\) is not a saddle. The trace of the Jacobian is \[
\mathrm{tr} A=b-1-a^2.
\]
Hence if \(b>1+a^2\) the trace is positive and, referring to Figure 4.1, the steady state is linearly unstable. Otherwise, the steady state is linearly stable.
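These linear stability calculations can be verified symbolically; the short sympy sketch below computes the Jacobian of the nondimensionalised system at \((a,b/a)\) and confirms that its determinant is \(a^2\) and its trace is \(b-1-a^2\).
Code
import sympy as sp

# Scaled concentrations and (positive) parameters of the Brusselator
x, y, a, b = sp.symbols('x y a b', positive=True)

# Nondimensionalised right-hand sides
f = a - (b + 1)*x + x**2*y
g = b*x - x**2*y

# Jacobian evaluated at the steady state (a, b/a)
J = sp.Matrix([f, g]).jacobian([x, y]).subs({x: a, y: b/a})

print(sp.simplify(J.det()))    # a**2
print(sp.simplify(J.trace()))  # -a**2 + b - 1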
In Figure 5.2 we numerically solve Equation 5.4 in the case of oscillatory solutions. Note that when the steady state is unstable, the numerical solution indicates that the system has limit cycle solutions. A valid question to ask is whether one can prove that this is the case.
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Define model parameters
a=1.0
b=3.2

# Compute RHS of the Brusselator model
def rhs_bruss_model(z,t):
    rhs=np.zeros_like(z)
    x=z[0]
    y=z[1]
    dx_dt=a-b*x+x**2*y-x
    dy_dt=b*x-x**2*y
    rhs[0]=dx_dt
    rhs[1]=dy_dt
    return rhs

# Define time domain and discretise
t = np.linspace(0, 50, 1000)

# Define ICs
init_cond=[0.5,0.5]

# Integrate ODEs
sol1 = odeint(rhs_bruss_model, init_cond, t)

# Plot results
x=sol1[:,0]
y=sol1[:,1]
fig, ax = plt.subplots()
ax.plot(t, x, 'b', t, y, 'r')
ax.legend(['$x$','$y$'], loc='best')
ax.set_xlabel('$t$')
plt.grid()
plt.show()
Figure 5.2: Numerical solutions of the Brusselator model
5.3.5 Applying the Poincaré-Bendixson theorem
Recall that the Poincaré-Bendixson theorem can be used to prove the existence of limit cycle solutions when a bounding set encloses a single unstable steady state. Hence, for the Brusselator with \(b>a^2+1\), the identification of a bounding set will allow application of the Poincaré-Bendixson theorem and hence prove the existence of limit cycle solutions.
We begin by defining the nullclines of the system. The \(x\) nullcline is given by
\[
y=\frac{1+b}{x}-\frac{a}{x^2}.
\]
The \(y\) nullclines are given by
\[
y=\frac{b}{x} \ \ \ \textrm{and} \ \ \ x=0.
\]
In the positive quadrant, the \(x\) nullcline has a single root at \(x=a/(1+b)\), tends to 0 as \(x\rightarrow \infty\) and has a maximum at \(x=2a/(1+b)\).
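These properties follow directly from the formula for the \(x\) nullcline:
\[
y=\frac{(1+b)x-a}{x^2}=0 \Rightarrow x=\frac{a}{1+b}, \qquad
\frac{dy}{dx}=-\frac{1+b}{x^2}+\frac{2a}{x^3}=0 \Rightarrow x=\frac{2a}{1+b}.
\]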
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Note: this block reuses rhs_bruss_model (and hence the global parameters a and b)
# defined in the previous code block.

# Discretise x for plotting nullclines
x_vec=np.linspace(0.01,6,100)

# Compute quantities needed for the phase plane
def ComputeBrusselatorSol(a,b,x_vec):
    t = np.linspace(0, 10, 1000)

    # Define different ICs
    init_cond1=[0.75,0.75]
    init_cond2=[4.55,4.15]
    init_cond3=[2.5,0.5]

    # Steady state
    ss=[a,b/a]

    # Integrate ODEs
    sol1 = odeint(rhs_bruss_model, init_cond1, t)
    sol2 = odeint(rhs_bruss_model, init_cond2, t)
    sol3 = odeint(rhs_bruss_model, init_cond3, t)

    # Compute nullclines
    x_ncline=(1+b)/x_vec-a/x_vec**2
    y_ncline_1_a=[0,0]
    y_ncline_1_b=[0,5]
    y_ncline_2_x=b/x_vec

    return sol1,sol2,sol3,ss,x_ncline,y_ncline_1_a,y_ncline_1_b,y_ncline_2_x

# Compute quantities
a=1.0
b=3.2
sol1,sol2,sol3,ss,x_ncline,y_ncline_1_a,y_ncline_1_b,y_ncline_2_x=ComputeBrusselatorSol(a,b,x_vec)

# Plot results
fig, ax = plt.subplots(1,2)
ax[0].plot(x_vec,x_ncline,'k--')
ax[0].plot(y_ncline_1_a,y_ncline_1_b,'k--')
ax[0].plot(x_vec,y_ncline_2_x,'r--')
ax[0].plot(sol1[:,0],sol1[:,1],sol2[:,0],sol2[:,1],sol3[:,0],sol3[:,1])
ax[0].plot(ss[0],ss[1],'k*')
ax[0].set_xlabel('$x$')
ax[0].set_ylabel('$y$')
ax[0].set_xlim([-0.05,6])
ax[0].set_ylim([-0.005,6])

# Try different parameters
a=4.0
b=3.2

# Recompute quantities
sol1,sol2,sol3,ss,x_ncline,y_ncline_1_a,y_ncline_1_b,y_ncline_2_x=ComputeBrusselatorSol(a,b,x_vec)

# Plot results
ax[1].plot(x_vec,x_ncline,'k--')
ax[1].plot(y_ncline_1_a,y_ncline_1_b,'k--')
ax[1].plot(x_vec,y_ncline_2_x,'r--')
ax[1].plot(sol1[:,0],sol1[:,1],sol2[:,0],sol2[:,1],sol3[:,0],sol3[:,1])
ax[1].plot(ss[0],ss[1],'k*')
ax[1].set_xlabel('$x$')
ax[1].set_ylabel('$y$')
ax[1].set_xlim([-0.05,6])
ax[1].set_ylim([-0.005,6])

plt.tight_layout()
plt.grid()
plt.show()
Figure 5.3: Numerical solutions of the Brusselator in the \((x,y)\) phase plane, together with the nullclines.
In Figure 5.3 we plot the oscillatory solution together with the nullclines. The nullclines separate the phase plane into regions in which the time derivatives have constant sign: the \(x\) nullcline separates regions where \(dx/dt>0\) from regions where \(dx/dt<0\), and the \(y\) nullcline separates regions where \(dy/dt>0\) from regions where \(dy/dt<0\).
The sign of \(dx/dt\) can be determined by considering the behaviour of the function \(f\) as one moves away from the \(x\) nullcline, on which, by definition, \(f=0\). Consider a point that sits on the \(x\) nullcline. Keeping the \(x\) value fixed, increase \(y\) so that the point moves vertically upwards in the phase plane. The only term in \(f\) that changes is \(x^2y\), which increases; hence \(f\) increases with \(y\), and therefore \(dx/dt>0\) at all points above the \(x\) nullcline and \(dx/dt<0\) below it. Alternatively, note that we have calculated that the Jacobian entry \(\partial f/\partial y=x^2\) is positive. The same argument applied to \(g\), for which \(\partial g/\partial y=-x^2<0\), shows that \(dy/dt<0\) above the \(y\) nullcline and \(dy/dt>0\) below it.
5.3.5.1 A confined set
To define a confined set, we construct the closed loop ABCDEA in phase space (see Figure 5.4) where the points A, B, C, D and E are defined as follows. Let the points A and B be two points on the \(x\) axis with coordinates \[
(\delta,0)
\] and \[
\left(\frac{a}{1+b},0\right),
\] respectively, where we choose \(\delta>a\) so that the steady state \((a,b/a)\) lies inside the loop. The outward normal to the line segment AB is \(\mathbf{n}=[0,-1]\). To show that the trajectories point inwards along AB, we compute \[
\mathbf{n}\cdot\left[\frac{dx}{dt},\frac{dy}{dt}\right]=-\frac{dy}{dt}=-\left(bx-x^2y\right)=-bx<0,
\] since \(y=0\) and \(x>0\) on AB. Hence trajectories point inwards on the line segment AB.
Let C be the point \[
\left(\frac{a}{1+b},\frac{b(1+b)}{a}\right),
\] at which the vertical line through B meets the \(y\) nullcline \(y=b/x\). The outward normal to the line segment BC is \(\mathbf{n}=[-1,0]\) and, on BC, \[
\frac{dx}{dt}=a-(1+b)x+x^2y=x^2y\geq 0,
\] since \(x=a/(1+b)\) there, so that \(\mathbf{n}\cdot\left[\frac{dx}{dt},\frac{dy}{dt}\right]=-\frac{dx}{dt}\leq 0\) (with equality only at B, where \(dy/dt=bx>0\) carries trajectories into the region). Hence trajectories point inwards on the line segment BC.
Consider the line segment CD where D has coordinates \[
\left(k,b\frac{1+b}{a}\right),
with \(k>a\). The outward normal to this line segment is \([0,1]\). As CD lies above the \(y\) nullcline, it is in a region of the phase plane where \(dy/dt<0\), so
\[
\mathbf{n}.\left[\frac{dx}{dt},\frac{dy}{dt}\right] = \frac{dy}{dt}<0.
\] Hence trajectories point inwards on the line segment CD.
Let E be a point that sits on the \(x\) nullcline at some position \[
\left(\delta,\frac{\delta(1+b)-a}{\delta^2}\right),
such that DE is a straight line with outwardly pointing normal vector \([1,1]\). Along DE we compute \[
\mathbf{n}\cdot\left[\frac{dx}{dt},\frac{dy}{dt}\right]=\frac{dx}{dt}+\frac{dy}{dt}=\left(a-(1+b)x+x^2y\right)+\left(bx-x^2y\right)=a-x<0,
\] since \(x>a\) at every point of DE. Hence trajectories point inwards on the line segment DE. Finally, the segment EA lies on the vertical line \(x=\delta\) below the \(x\) nullcline, where \(dx/dt\leq 0\); its outward normal is \([1,0]\), so \(\mathbf{n}\cdot\left[\frac{dx}{dt},\frac{dy}{dt}\right]=\frac{dx}{dt}\leq 0\) and trajectories do not leave through EA.
Figure 5.4: The closed loop ABCDEA used to construct a confined set for the Brusselator.
Hence at all points on the closed loop ABCDEA trajectories point inwards (see Figure 5.4). Hence ABCDEA is a confined set and, by the Poincaré-Bendixson theorem, the system exhibits limit cycle solutions when the steady state is unstable. This is precisely the behaviour that we see numerically in Figure 5.3.