
Finite differences

Another method of solving boundary-value problems (and also partial differential equations, as we’ll see later) involves finite differences, which are numerical approximations to exact derivatives.

Recall that the exact derivative of a function $f(x)$ at some point $x$ is defined as:

$$
f^{\prime}(x) = \frac{df}{dx}(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}
$$

So, we can approximate this derivative using a finite difference (rather than an infinitesimal difference as in the exact derivative):

$$
f^{\prime}(x) \approx \frac{f(x+\Delta x) - f(x)}{\Delta x}
$$

which involves some error. This is a forward difference for approximating the first derivative. We can also approximate the first derivative using a backward difference:

$$
f^{\prime}(x) \approx \frac{f(x) - f(x - \Delta x)}{\Delta x}
$$

To understand the error involved in these differences, we can use Taylor’s theorem to obtain Taylor series expansions:

$$
\begin{align}
f(x + \Delta x) &= f(x) + \Delta x \, f^{\prime}(x) + \frac{\Delta x^2}{2!} f^{\prime\prime}(x) + \cdots \\
\rightarrow \frac{f(x + \Delta x) - f(x)}{\Delta x} &= f^{\prime}(x) + \mathcal{O}\left( \Delta x \right) \\
f(x - \Delta x) &= f(x) - \Delta x \, f^{\prime}(x) + \frac{\Delta x^2}{2!} f^{\prime\prime}(x) - \cdots \\
\rightarrow \frac{f(x) - f(x - \Delta x)}{\Delta x} &= f^{\prime}(x) + \mathcal{O}\left( \Delta x \right)
\end{align}
$$

where the $\mathcal{O}(\Delta x)$ notation indicates that the leading error term is proportional to $\Delta x$. So, we can see that each of these approximations is first-order accurate: halving the step size $\Delta x$ roughly halves the error.

# import libraries for numerical functions and plotting
import numpy as np
import matplotlib.pyplot as plt
# these lines are only for helping improve the display
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 300
plt.rcParams['savefig.dpi'] = 300
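
As a quick numerical check of this first-order behavior, we can evaluate the forward and backward differences for a known function at several step sizes (using $f(x) = \sin x$, whose exact derivative is $\cos x$; the test function is arbitrary, chosen just for illustration):

```python
import numpy as np

def forward_diff(f, x, dx):
    """Forward-difference approximation to f'(x)."""
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    """Backward-difference approximation to f'(x)."""
    return (f(x) - f(x - dx)) / dx

x = 1.0
exact = np.cos(x)  # exact derivative of sin(x)
for dx in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(np.sin, x, dx) - exact)
    print(f'dx = {dx:6.3f}, forward-difference error = {err:.2e}')
# each halving of dx roughly halves the error, as expected for a
# first-order method
```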

Second-order finite differences

We can obtain higher-order approximations for the first derivative, and an approximation for the second derivative, by combining these Taylor series expansions:

$$
\begin{align}
f(x + \Delta x) &= f(x) + \Delta x \, f^{\prime}(x) + \frac{\Delta x^2}{2!} f^{\prime\prime}(x) + \frac{\Delta x^3}{3!} f^{\prime\prime\prime}(x) + \mathcal{O}\left( \Delta x^4 \right) \\
f(x - \Delta x) &= f(x) - \Delta x \, f^{\prime}(x) + \frac{\Delta x^2}{2!} f^{\prime\prime}(x) - \frac{\Delta x^3}{3!} f^{\prime\prime\prime}(x) + \mathcal{O}\left( \Delta x^4 \right)
\end{align}
$$

Subtracting the Taylor series for $f(x-\Delta x)$ from that for $f(x+\Delta x)$ gives:

$$
\begin{align}
f(x + \Delta x) - f(x - \Delta x) &= 2 \Delta x \, f^{\prime}(x) + \mathcal{O}\left( \Delta x^3 \right) \\
f^{\prime}(x) &= \frac{f(x + \Delta x) - f(x - \Delta x)}{2 \Delta x} + \mathcal{O}\left( \Delta x^2 \right)
\end{align}
$$

which is a second-order accurate approximation for the first derivative, known as a central difference.

Adding the Taylor series for $f(x+\Delta x)$ to that for $f(x-\Delta x)$ gives:

$$
\begin{align}
f(x + \Delta x) + f(x - \Delta x) &= 2 f(x) + \Delta x^2 f^{\prime\prime}(x) + \mathcal{O}\left( \Delta x^4 \right) \\
f^{\prime\prime}(x) &= \frac{f(x + \Delta x) - 2 f(x) + f(x - \Delta x)}{\Delta x^2} + \mathcal{O}\left( \Delta x^2 \right)
\end{align}
$$

which is a second-order accurate approximation for the second derivative.
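
We can verify the second-order behavior of both formulas numerically, again using $f(x) = \sin x$ as an arbitrary test function (so the exact first and second derivatives are $\cos x$ and $-\sin x$):

```python
import numpy as np

def central_diff(f, x, dx):
    """Central-difference approximation to f'(x), second-order accurate."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def second_deriv(f, x, dx):
    """Central approximation to f''(x), second-order accurate."""
    return (f(x + dx) - 2*f(x) + f(x - dx)) / dx**2

x = 1.0
for dx in [0.1, 0.05]:
    err1 = abs(central_diff(np.sin, x, dx) - np.cos(x))   # exact f' = cos(x)
    err2 = abs(second_deriv(np.sin, x, dx) + np.sin(x))   # exact f'' = -sin(x)
    print(f'dx = {dx}: first-derivative error = {err1:.2e}, '
          f'second-derivative error = {err2:.2e}')
# halving dx cuts each error by about a factor of four (second order)
```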

Solving ODEs with finite differences

We can use finite differences to solve ODEs by substituting them for exact derivatives, and then applying the equation at discrete locations in the domain. This gives us a system of simultaneous equations to solve.

For example, let’s consider the ODE

$$
y^{\prime\prime} + x y^{\prime} - x y = 2 x \;,
$$

with the boundary conditions $y(0) = 1$ and $y(2) = 8$.

First, we discretize the continuous domain: divide it into a number of discrete segments. For now, let's choose $\Delta x = 0.5$, which creates four segments and thus five points: $x_1 = 0$, $x_2 = 0.5$, $x_3 = 1.0$, $x_4 = 1.5$, $x_5 = 2.0$.

Our goal is then to find approximate values of $y(x)$ at these points: $y_1$ through $y_5$. So, we have five unknowns, and need five equations to solve for them. We can use the ODE to provide these equations, by replacing the derivatives with finite differences and applying the equation at particular discrete locations.

Recall that $y(x)$ is a function just like $f(x)$, so we can apply the above finite-difference formulas to it. Now that we have points, or nodes, at locations separated by $\Delta x$, we can consider a point $x_i$ where $y(x_i) = y_i$, $y(x_i + \Delta x) = y(x_{i+1}) = y_{i+1}$, and $y(x_i - \Delta x) = y(x_{i-1}) = y_{i-1}$.

To do this, we’ll follow a few steps:

1.) Replace exact derivatives in the original ODE with finite differences, and apply the equation at a particular location $(x_i, y_i)$.

For our example, this gives:

$$
\frac{y_{i+1} - 2y_i + y_{i-1}}{\Delta x^2} + x_i \left( \frac{y_{i+1} - y_{i-1}}{2 \Delta x}\right) - x_i y_i = 2 x_i
$$

which applies at location $(x_i, y_i)$.

2.) Next, rearrange the equation into a recursion formula:

$$
y_{i-1} \left(1 - x_i \frac{\Delta x}{2}\right) + y_i \left( -2 - \Delta x^2 x_i \right) + y_{i+1} \left(1 + x_i \frac{\Delta x}{2}\right) = 2 x_i \Delta x^2
$$

We can use this equation to get an equation for each of the interior points in the domain.

For the first and last points—the boundary points—we already have equations, given by the boundary conditions.

3.) Set up system of linear equations

Applying the recursion formula to the interior points, and the boundary conditions for the boundary points, we can get a system of simultaneous linear equations:

$$
\begin{align}
y_1 &= 1 \\
y_1 (0.875) + y_2 (-2.125) + y_3 (1.125) &= 0.25 \\
y_2 (0.75) + y_3 (-2.25) + y_4 (1.25) &= 0.5 \\
y_3 (0.625) + y_4 (-2.375) + y_5 (1.375) &= 0.75 \\
y_5 &= 8
\end{align}
$$

This is a system of five equations and five unknowns, which we can solve! But, solving using substitution would be painful, so let’s represent this system of equations using a matrix and vectors:

$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0.875 & -2.125 & 1.125 & 0 & 0 \\
0 & 0.75 & -2.25 & 1.25 & 0 \\
0 & 0 & 0.625 & -2.375 & 1.375 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0.25 \\ 0.5 \\ 0.75 \\ 8 \end{bmatrix}
$$

or, more compactly, $A \mathbf{y} = \mathbf{b}$.

4.) Solve the linear system of equations

The final step is just to solve. In Matlab we could do this with y = A \ b; in Python, we use np.linalg.solve(A, b). (Either is equivalent to computing the inverse of A and multiplying it into b, but faster and more accurate.)

A = np.array([
    [1.0, 0, 0, 0, 0],
    [0.875, -2.125, 1.125, 0, 0],
    [0, 0.75, -2.25, 1.25, 0],
    [0, 0, 0.625, -2.375, 1.375],
    [0, 0, 0, 0, 1]
    ])

b = np.array([1.0, 0.25, 0.5, 0.75, 8.0])

x = np.linspace(0, 2, 5)
y = np.linalg.solve(A, b)
plt.plot(x, y, 'o-')
plt.grid()
plt.show()

General implementation

Of course, all of this will be easier if we implement it in Python in a general way. We'll use a for loop to populate the coefficient matrix $A$ and right-hand-side vector $\mathbf{b}$:

dx = 0.5
x_vals = np.arange(0, 2.01, dx)
A = np.zeros((len(x_vals), len(x_vals)))
b = np.zeros_like(x_vals)

for idx, x in enumerate(x_vals):
    if idx == 0:
        A[0,0] = 1
        b[0] = 1
    elif idx == len(x_vals) - 1:
        A[-1,-1] = 1
        b[-1] = 8
    else:
        A[idx, idx-1] = 1 - x*dx/2
        A[idx, idx] = -2 - x*dx**2
        A[idx, idx+1] = 1 + x*dx/2
        b[idx] = 2 * x * dx**2
y_vals = np.linalg.solve(A, b)
plt.plot(x_vals, y_vals, 'o-')
plt.grid()
plt.show()

This looks good, but we can get a more accurate solution by reducing our step size $\Delta x$:

dx = 0.001
x_vals = np.arange(0, 2.01, dx)
A = np.zeros((len(x_vals), len(x_vals)))
b = np.zeros_like(x_vals)

for idx, x in enumerate(x_vals):
    if idx == 0:
        A[0,0] = 1
        b[0] = 1
    elif idx == len(x_vals) - 1:
        A[-1,-1] = 1
        b[-1] = 8
    else:
        A[idx, idx-1] = 1 - x*dx/2
        A[idx, idx] = -2 - x*dx**2
        A[idx, idx+1] = 1 + x*dx/2
        b[idx] = 2 * x * dx**2
y_vals = np.linalg.solve(A, b)
plt.plot(x_vals, y_vals)
plt.grid()
plt.show()

Boundary conditions

We will encounter four main kinds of boundary conditions. Consider the ODE $y^{\prime\prime} + y = 0$ on the domain $0 \leq x \leq L$.

  • First type, or Dirichlet, boundary conditions specify fixed values of $y$ at the boundaries: $y(0) = a$ and $y(L) = b$.

  • Second type, or Neumann, boundary conditions specify values of the derivative at the boundaries: $y^{\prime}(0) = a$ and $y^{\prime}(L) = b$.

  • Third type, or Robin, boundary conditions specify a linear combination of the function value and its derivative at the boundaries: $a \, y(0) + b \, y^{\prime}(0) = g(0)$ and $a \, y(L) + b \, y^{\prime}(L) = g(L)$, where $g(x)$ is some function.

  • Mixed boundary conditions, which combine any of these three at the different boundaries. For example, we could have $y(0) = a$ and $y^{\prime}(L) = b$.

Whichever type of boundary condition we are dealing with, the goal will be to construct an equation representing the boundary condition to incorporate in our system of equations.

If we have a fixed-value boundary condition, such as $y(0) = a$, then this equation is straightforward:

$$
y_1 = a
$$

where $y_1$ is the first point in the grid of points, corresponding to $x_1 = 0$. (We saw this in the example above.) In Python, we can implement this equation with

A[0,0] = 1
b[0] = a

If we have a fixed-derivative boundary condition, such as $y^{\prime}(0) = 0$, then we need to use a finite difference to represent the derivative. When the boundary condition is at the starting location, $x=0$, the easiest way to do this is with a forward difference:

$$
\begin{align}
y^{\prime}(0) \approx \frac{y_2 - y_1}{\Delta x} &= 0 \\
-y_1 + y_2 &= 0
\end{align}
$$

We can implement this in Python with

A[0,0] = -1
A[0,1] = 1
b[0] = 0

When we have this sort of derivative boundary condition at the right side of the domain, at $x=L$, then we can use a backward difference to represent the derivative:

$$
\begin{align}
y^{\prime}(L) \approx \frac{y_n - y_{n-1}}{\Delta x} &= 0 \\
-y_{n-1} + y_n &= 0
\end{align}
$$

where $y_n$ is the final point ($x_n = L$) and $y_{n-1}$ is the second-to-last point ($x_{n-1} = L - \Delta x$). We can implement this with

A[-1,-2] = -1
A[-1,-1] = 1
b[-1] = 0

If we have a linear combination of a fixed value and fixed derivative, like $a \, y(0) + b \, y^{\prime}(0) = c$, then we can combine the above approaches using a forward difference:

$$
\begin{align}
a \, y(0) + b \, y^{\prime}(0) \approx a y_1 + b \, \frac{y_2 - y_1}{\Delta x} &= c \\
(a \Delta x - b) y_1 + b y_2 &= c \Delta x
\end{align}
$$

and in Python:

A[0,0] = a*dx - b
A[0,1] = b
b[0] = c * dx

Using central differences for derivative BCs

When a boundary condition involves a derivative, we can use a central difference to approximate the first derivative; this is more accurate than a forward or backward difference.

Consider the formula for a central difference at $x=0$, applied for the boundary condition $y^{\prime}(0) = 0$:

$$
\begin{align}
y^{\prime}(0) \approx \frac{y_2 - y_0}{2 \Delta x} &= 0 \\
y_0 &= y_2
\end{align}
$$

where $y_0$ is an imaginary, or ghost, node outside the domain. We can't actually keep this point in our implementation, because it isn't a real point.

We still need an equation for the point at the boundary, $y_1$. To get this, we'll apply the regular recursion formula, normally used at interior points:

$$
\begin{align}
a y_{i-1} + b y_i + c y_{i+1} &= f(x_i) \\
a y_0 + b y_1 + c y_2 &= f(x_1) \;,
\end{align}
$$

where $a$, $b$, $c$, and $f(x)$ depend on the problem. Normally we wouldn't use this at the boundary node, $y_1$, because it references a point outside the domain to the left. But we have an equation for that! From above, based on the boundary condition, we have $y_0 = y_2$. If we incorporate that into the recursion formula, we can eliminate the ghost node $y_0$:

$$
\begin{align}
a y_2 + b y_1 + c y_2 &= f(x_1) \\
b y_1 + (a + c) y_2 &= f(x_1)
\end{align}
$$

which is the equation we can actually use at the boundary point. In Python, this looks like

A[0,0] = b
A[0,1] = a + c
b[0] = f(x[0])
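
To see the ghost-node approach end to end, here is a sketch that reuses the earlier example ODE $y^{\prime\prime} + x y^{\prime} - x y = 2x$, but with the left boundary condition swapped to $y^{\prime}(0) = 0$ (a hypothetical variant, chosen just to exercise the ghost node). At $x_1 = 0$ the recursion coefficients are $a = 1 - x_1 \Delta x / 2 = 1$, $b = -2 - \Delta x^2 x_1 = -2$, and $c = 1 + x_1 \Delta x / 2 = 1$:

```python
import numpy as np

dx = 0.01
x_vals = np.arange(0, 2.001, dx)
n = len(x_vals)
A = np.zeros((n, n))
rhs = np.zeros(n)

for idx, x in enumerate(x_vals):
    if idx == 0:
        # ghost-node form of y'(0) = 0: b*y_1 + (a + c)*y_2 = f(x_1)
        A[0, 0] = -2 - x * dx**2                 # b coefficient at x = 0
        A[0, 1] = (1 - x*dx/2) + (1 + x*dx/2)    # a + c
        rhs[0] = 2 * x * dx**2                   # f(x_1)
    elif idx == n - 1:
        A[-1, -1] = 1        # y(2) = 8, as before
        rhs[-1] = 8
    else:
        # interior nodes: regular recursion formula
        A[idx, idx-1] = 1 - x*dx/2
        A[idx, idx] = -2 - x*dx**2
        A[idx, idx+1] = 1 + x*dx/2
        rhs[idx] = 2 * x * dx**2

y_vals = np.linalg.solve(A, rhs)
# the slope at the left boundary should be (approximately) zero
print(f"y'(0) ~ {(y_vals[1] - y_vals[0]) / dx:.2e}")
```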

Example: nonlinear BVP

So far we’ve seen how to handle a linear boundary value problem, but what if we have a nonlinear BVP? This is going to be trickier, because our work so far relies on using linear algebra to solve the system of (linear) equations.

For example, consider the 2nd-order ODE

$$
y^{\prime\prime} = 3y + x^2 + 100 y^2
$$

with the boundary conditions $y(0) = y(1) = 0$. This is nonlinear, due to the $100 y^2$ term on the right-hand side.

To solve this, let's first convert it into a discrete form, by replacing the second derivative with a finite difference and any $x$ or $y$ present with $x_i$ and $y_i$. We'll also move any constants (i.e., terms that don't contain $y_i$) and the nonlinear term to the right-hand side:

$$
\frac{y_{i-1} - 2y_i + y_{i+1}}{\Delta x^2} - 3y_i = x_i^2 + 100 y_i^2
$$

where the boundary conditions are now $y_1 = 0$ and $y_n = 0$, with $n$ as the number of grid points. We can rearrange and simplify into our recursion formula:

$$
y_{i-1} + y_i \left( -2 - 3 \Delta x^2 \right) + y_{i+1} = x_i^2 \Delta x^2 + 100 \Delta x^2 y_i^2
$$

The question is: how do we solve this now? The nonlinear term involving $y_i^2$ on the right-hand side complicates things, but we know how to set up and solve this without the nonlinear term. We can use an approach known as successive iteration:

  1. Solve the ODE without the nonlinear term to get an initial "guess" at the solution for $y$.

  2. Then, incorporate that guess in the nonlinear term on the right-hand side, treating it as a constant; call it $y_{\text{old}}$. Then, solve the full system for a new $y$ solution.

  3. Check whether the new $y$ matches $y_{\text{old}}$ within some tolerance: for example, check whether $\max\left(\left| y - y_{\text{old}} \right| \right)$ is below some tolerance, such as $10^{-6}$. If so, we can consider the solution converged. If not, set $y_{\text{old}} = y$ and repeat the process starting at step 2.

Let’s implement that process in Python:

dx = 0.01
x_vals = np.arange(0, 1.001, dx)

A = np.zeros((len(x_vals), len(x_vals)))
b = np.zeros(len(x_vals))

# First, solve the problem without the nonlinear term:
for idx, x in enumerate(x_vals):
    if idx == 0:
        # x = 0 boundary condition
        A[0,0] = 1
        b[0] = 0
    elif idx == len(x_vals) - 1:
        # x = L boundary condition
        A[-1,-1] = 1
        b[-1] = 0
    else:
        # interior nodes, use recursion formula
        A[idx, idx-1] = 1
        A[idx,idx] = -2 - 3*dx**2
        A[idx, idx+1] = 1
        b[idx] = x**2 * dx**2

# get solution without nonlinear term
y_vals = np.linalg.solve(A, b)

plt.plot(x_vals, y_vals, '--', label='Initial solution')

# Now, set up iterative process to solve while 
# incorporating nonlinear terms
iters = 1
y_old = np.zeros_like(x_vals)

while np.max(np.abs(y_vals - y_old)) > 1e-6:
    y_old[:] = y_vals[:]
    # the A matrix does not change, but the b vector does
    for idx, x in enumerate(x_vals):
        if idx > 0 and idx < len(x_vals)-1:
            b[idx] = x**2 * dx**2 + 100*(dx**2)*y_old[idx]**2
    
    y_vals = np.linalg.solve(A, b)
    iters += 1

print(f'Number of iterations: {iters}')
plt.plot(x_vals, y_vals, label='Final solution')
plt.legend()
plt.grid()
plt.show()
Number of iterations: 16

Another option is just to set our initial "guess" for the $y$ solution to zero, rather than solving the problem in two steps:

dx = 0.01
x_vals = np.arange(0, 1.001, dx)

A = np.zeros((len(x_vals), len(x_vals)))
b = np.zeros_like(x_vals)

# Set up the coefficient matrix, which does not change
for idx, x in enumerate(x_vals):
    if idx == 0:
        # x = 0 boundary condition
        A[0,0] = 1
        b[0] = 0
    elif idx == len(x_vals) - 1:
        # x = L boundary condition
        A[-1,-1] = 1
        b[-1] = 0
    else:
        # interior nodes, use recursion formula
        A[idx, idx-1] = 1
        A[idx,idx] = -2 - 3*dx**2
        A[idx, idx+1] = 1
        b[idx] = x**2 * dx**2

# just use zeros as our initial guess for the solution
y_vals = np.zeros_like(x_vals)
             
# Successive iteration
iters = 1
# setting this to some random values, just to enter the while loop
y_old = 100 * np.random.rand(len(x_vals))
while np.max(np.abs(y_vals - y_old)) > 1e-6:
    y_old[:] = y_vals[:]
    # the A matrix does not change, but the b vector does
    for idx, x in enumerate(x_vals):
        if idx > 0 and idx < len(x_vals)-1:
            b[idx] = x**2 * dx**2 + 100*(dx**2)*y_old[idx]**2
    
    y_vals = np.linalg.solve(A, b)
    iters += 1

print(f'Number of iterations: {iters}')
Number of iterations: 17

This made our process take slightly more iterations, because the initial guess was slightly further away from the final solution. For other problems, having a bad initial guess could make the process take much longer, so coming up with a good initial guess may be important.

Example: heat transfer through a fin

Let’s now consider a more complicated example: heat transfer through an extended surface (a fin).

:::{figure-md}
<img src="../../images/fin.png" alt="Heat transfer fin" class="bg-primary mb-1" width="400px">

Geometry of a heat transfer fin
:::

In this situation, we have the temperature of the body $T_b$; the temperature of the ambient fluid $T_{\infty}$; the length $L$, width $w$, and thickness $t$ of the fin; the thermal conductivity of the fin material $k$; and the convection heat transfer coefficient $h$.

The boundary conditions can be defined in different ways, but generally we can say that the temperature of the fin at the wall is the same as the body temperature, and that the fin is insulated at the tip. This gives us

$$
\begin{align}
T(x=0) &= T_b \\
q(x=L) = 0 \rightarrow \frac{dT}{dx} (x=L) &= 0
\end{align}
$$

Our goal is to solve for the temperature distribution $T(x)$. To do this, we need to set up a governing differential equation. Let's do a control volume analysis of heat transfer through the fin:

:::{figure-md}
<img src="../../images/fin-control-volume.png" alt="Control volume for heat transfer fin" class="bg-primary mb-1" width="300px">

Control volume for heat transfer through the fin
:::

Given a particular volumetric slice of the fin, we can define the heat transfer rates of conduction through the fin and convection from the fin to the air:

$$
\begin{align}
q_{\text{conv}} &= h P \left( T - T_{\infty} \right) \Delta x \\
q_{\text{cond}, x} &= -k A_c \left(\frac{dT}{dx}\right)_{x} \\
q_{\text{cond}, x+\Delta x} &= -k A_c \left(\frac{dT}{dx}\right)_{x+\Delta x} \;,
\end{align}
$$

where $P$ is the perimeter (so that $P \, \Delta x$ is the heat transfer area to the fluid) and $A_c$ is the cross-sectional area.

Performing a balance through the control volume:

$$
\begin{align}
q_{\text{cond}, x+\Delta x} &= q_{\text{cond}, x} - q_{\text{conv}} \\
-k A_c \left(\frac{dT}{dx}\right)_{x+\Delta x} &= -k A_c \left(\frac{dT}{dx}\right)_{x} - h P \left( T - T_{\infty} \right) \Delta x \\
-k A_c \frac{\left.\frac{dT}{dx}\right|_{x+\Delta x} - \left.\frac{dT}{dx}\right|_{x}}{\Delta x} &= -h P ( T - T_{\infty} ) \\
\lim_{\Delta x \rightarrow 0} : \quad -k A_c \frac{d^2 T}{dx^2} &= -h P (T - T_{\infty}) \\
\frac{d^2 T}{dx^2} &= \frac{h P}{k A_c} (T - T_{\infty}) \\
\frac{d^2 T}{dx^2} &= m^2 (T - T_{\infty})
\end{align}
$$

then we have as a governing equation

$$
\frac{d^2 T}{dx^2} - m^2 (T - T_{\infty}) = 0 \;,
$$

where $m^2 = (h P)/(k A_c)$.

We can obtain an exact solution for this ODE. For convenience, let's define a new variable, $\theta$, which is a normalized temperature:

$$
\theta \equiv T - T_{\infty}
$$

where $\theta^{\prime} = T^{\prime}$ and $\theta^{\prime\prime} = T^{\prime\prime}$. This gives us a new governing equation:

$$
\theta^{\prime\prime} - m^2 \theta = 0 \;.
$$

This is a 2nd-order homogeneous ODE, which looks a lot like $y^{\prime\prime} + a y = 0$. The exact solution is then

$$
\begin{align}
\theta(x) &= c_1 e^{-m x} + c_2 e^{m x} \\
T(x) &= T_{\infty} + c_1 e^{-m x} + c_2 e^{m x}
\end{align}
$$

We’ll use this to look at the accuracy of a numerical solution, but we will not be able to find an exact solution for more complicated versions of this problem.
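
For reference, applying the fin's boundary conditions ($\theta(0) = \theta_b \equiv T_b - T_{\infty}$ at the base, and $\theta^{\prime}(L) = 0$ at the insulated tip) determines $c_1$ and $c_2$, and gives the familiar closed form:

$$
\begin{align}
c_1 + c_2 &= \theta_b \\
-m c_1 e^{-m L} + m c_2 e^{m L} &= 0 \\
\rightarrow \theta(x) &= \theta_b \, \frac{\cosh m(L - x)}{\cosh m L} \\
T(x) &= T_{\infty} + \left( T_b - T_{\infty} \right) \frac{\cosh m(L - x)}{\cosh m L}
\end{align}
$$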

We can also solve this numerically using the finite difference method. Let’s replace the derivative with a finite difference:

$$
\begin{align}
\frac{d^2 T}{dx^2} - m^2 (T - T_{\infty}) &= 0 \\
\frac{T_{i-1} - 2T_i + T_{i+1}}{\Delta x^2} - m^2 \left( T_i - T_{\infty} \right) &= 0
\end{align}
$$

which we can rearrange into a recursion formula:

$$
T_{i-1} + T_i \left( -2 - \Delta x^2 m^2 \right) + T_{i+1} = -m^2 \Delta x^2 \, T_{\infty}
$$

This gives us an equation for all the interior nodes; we can use the above boundary conditions to get equations for the boundary nodes. For the boundary condition at $x=L$, $T^{\prime}(x=L) = 0$, let's use a backward difference:

$$
\begin{align}
T_1 &= T_b \\
\frac{T_n - T_{n-1}}{\Delta x} = 0 \rightarrow -T_{n-1} + T_n &= 0
\end{align}
$$

Combining all these equations, we can construct a linear system: $A \mathbf{T} = \mathbf{b}$.
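
Putting this together, here is a sketch that builds and solves that system, then compares it against the exact solution $T(x) = T_{\infty} + (T_b - T_{\infty}) \cosh m(L-x) / \cosh mL$ for these boundary conditions. The parameter values ($L$, $T_b$, $T_{\infty}$, $m^2$) are assumed purely for illustration, not taken from a specific problem:

```python
import numpy as np

# assumed parameter values, for illustration only:
L = 0.1       # fin length, m
Tb = 400.0    # base temperature, K
Tinf = 300.0  # ambient temperature, K
m2 = 100.0    # m^2 = h*P/(k*A_c), 1/m^2

dx = 0.001
x_vals = np.arange(0, L + dx/2, dx)
n = len(x_vals)
A = np.zeros((n, n))
rhs = np.zeros(n)

A[0, 0] = 1.0
rhs[0] = Tb                  # T(0) = Tb
A[-1, -2] = -1.0
A[-1, -1] = 1.0              # backward difference for T'(L) = 0
for i in range(1, n - 1):
    # interior nodes: recursion formula
    A[i, i-1] = 1.0
    A[i, i] = -2.0 - dx**2 * m2
    A[i, i+1] = 1.0
    rhs[i] = -m2 * dx**2 * Tinf

T = np.linalg.solve(A, rhs)

# compare against the exact solution for these boundary conditions
m = np.sqrt(m2)
T_exact = Tinf + (Tb - Tinf) * np.cosh(m*(L - x_vals)) / np.cosh(m*L)
print(f'max error: {np.max(np.abs(T - T_exact)):.3e}')
```

The error here is dominated by the first-order backward difference used for the tip boundary condition; the ghost-node treatment above would recover second-order accuracy.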

Heat transfer with radiation

Let's now consider a more complicated case, where we also have radiation heat transfer occurring along the length of the fin. Now, our governing ODE is

$$
\frac{d^2 T}{dx^2} - \frac{h P}{k A_c} \left(T - T_{\infty}\right) - \frac{\sigma \epsilon P}{k A_c} \left(T^4 - T_{\infty}^4 \right) = 0
$$

This is a bit trickier to solve because of the nonlinear term involving $T^4$. But, we can handle it via the iterative solution method discussed above.
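
The successive-iteration loop from the nonlinear BVP example carries over directly: treat the $T^4$ term as a known source built from the previous iterate. A sketch, with all parameter values (including the lumped radiation coefficient c_rad) assumed purely for illustration:

```python
import numpy as np

# assumed parameter values, for illustration only:
L = 0.1        # fin length, m
Tb = 400.0     # base temperature, K
Tinf = 300.0   # ambient temperature, K
m2 = 100.0     # h*P/(k*A_c), 1/m^2
c_rad = 1.0e-7 # lumped radiation coefficient, 1/(m^2 K^3), assumed

dx = 0.001
x_vals = np.arange(0, L + dx/2, dx)
n = len(x_vals)
A = np.zeros((n, n))
rhs = np.zeros(n)

A[0, 0] = 1.0
rhs[0] = Tb                  # T(0) = Tb
A[-1, -2] = -1.0
A[-1, -1] = 1.0              # backward difference for T'(L) = 0
for i in range(1, n - 1):
    A[i, i-1] = 1.0
    A[i, i] = -2.0 - dx**2 * m2
    A[i, i+1] = 1.0

# successive iteration: the radiation term uses the previous solution
T = np.full(n, Tinf)         # initial guess
T_old = np.zeros(n)
iters = 0
while np.max(np.abs(T - T_old)) > 1e-6 and iters < 200:
    T_old[:] = T
    rhs[1:-1] = (-m2 * dx**2 * Tinf
                 + c_rad * dx**2 * (T_old[1:-1]**4 - Tinf**4))
    T = np.linalg.solve(A, rhs)
    iters += 1

print(f'converged after {iters} iterations')
```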