Faculty of Medical and Health Sciences
Department of Pharmacology & Clinical Pharmacology, University of Auckland

Absorption and disposition

Objective and introduction

Objectives

The objectives are:

  1. Define common models of absorption using closed form solutions and differential equations.
  2. Find out how to compare the fit of models to the same data.

Introduction

Pharmacokinetic models have three main components:

  1. Input: Drug inputs are commonly described as:
    1. Bolus
    2. Zero order (Tk0)
    3. First Order (Tabs or Ka)
    A lag time (Tlag) may be used with any of these input models.

  2. Distribution: Most commonly defined in terms of the number of compartments. Distribution is usually a first order process. The key parameters of distribution are the volumes of distribution (V1, V2, Vss) for each compartment and inter-compartmental clearances (Clic1,..) between the central compartment and peripheral compartments of a mammillary model.
  3. Elimination: Drug elimination is characterised by models for clearance:
    1. First-order (CL)
    2. Mixed-order (Vmax, Km)

Absorption: definition and physiological determinants

Absorption is the name given to the process that determines how much of an administered drug enters the body. Absorption may be local or systemic in terms of both the physiological process and the resulting effects. Absorption kinetics generally describe the process of the drug entering the central compartment (circulation). Absorption is determined by multiple different processes and is commonly described by two factors: extent (F), which is time independent, and rate, which is time dependent.

Disposition

The term disposition is from the verb 'to dispose', or to get rid of. In pharmacology, disposition refers to what happens to the drug after it enters the body, comprising the distribution and elimination processes. This encompasses clearance and accumulation (i.e. everything except absorption).

Bioavailability

The term bioavailability (F) refers to the extent of absorption. Bioavailability is determined by two processes: the fraction absorbed (f) and the first pass extraction ratio (ER). The fraction absorbed is determined by blood flow, the physicochemical properties governing diffusion, intraluminal metabolism, and transport processes including secretion. ER is determined by organ blood flow and metabolism. The extent of absorption is estimated from the area under the concentration time curve (AUC), which is useful for comparing the bioavailability of generic drugs. AUC values can also be used to determine initial values of parameters for first order processes of elimination (Clearance = CL = Dose/AUC) and absorption (Bioavailability = F = AUC test/AUC reference). AUC estimation can involve error due to extrapolation beyond the last data point, so its accuracy is often limited by inadequate data (1, p. 67).
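
As a minimal worked illustration (the numbers are hypothetical, chosen to echo the workshop model parameters used later in this assignment): an intravenous dose of 100 mg giving an AUC of 33.3 mg*h/L implies CL = Dose/AUC = 100/33.3 = 3 L/h. If the same dose given orally produced an AUC of 16.7 mg*h/L, then F = AUC oral/AUC IV = 16.7/33.3 = 0.5.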

To convert an intravenous dose to an equivalent oral dose it is necessary to divide by bioavailability (F). For example, the bioavailability of oral morphine varies from 15-30%. This equates to an IV:oral dose ratio of about 1:3 for chronic opiate users, and 1:6 for the opiate naive.
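
As a rough arithmetic check of these ratios: with F = 0.3, Oral dose = IV dose/F = IV dose/0.3, i.e. about 3.3 times the IV dose (roughly 1:3); with F = 0.15, Oral dose = IV dose/0.15, i.e. about 6.7 times the IV dose (roughly 1:6).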

Absorption rate depends on the route of administration, the type of input, physiological processes (blood flow, gastric emptying), physicochemical factors and drug formulation. There are three types of input process describing the rate of absorption: bolus, zero order, or first order. Bolus input implies instantaneous absorption. Zero order input means a constant rate of input, e.g. an intravenous infusion. Repeated intermittent oral dosing achieves the same average steady state concentration as a constant rate infusion, and accumulates at the same rate, but with higher peak concentrations and swings between peak and trough concentrations. First order input means absorption occurs at a rate proportional to the amount of drug remaining to be absorbed, so the rate changes with time, typically decreasing (e.g. intramuscular, intestinal absorption).

Absorption rate is by definition time dependent; the term Tk0 is used to describe the duration of a zero order input. Tmax (time to peak concentration) and Cmax (peak concentration) are used to describe the rate of absorption and allow comparison between generic medicines. The half life of absorption predicts the time to peak concentration, and will be considered further in the next section.

The site of absorption of an oral preparation is an important consideration. Few drugs are absorbed in the stomach itself (alcohol being an exception). Most orally administered drugs are absorbed from the duodenum, so gastric emptying acts as a physiological control on the rate of absorption and may effectively result in zero order input kinetics. Slow release formulations represent a pharmaceutical control of the rate of absorption that may also result in zero order input. Delays in drug absorption may be due to a variety of processes, including gastric emptying and formulation, and can be coded in a model as a lag time (Tlag).

Different considerations apply to non-intestinal sites of drug administration and absorption. Drugs can be administered by almost any route imaginable, even intrapleural and intraperitoneal. Systemic absorption is usually the goal of rectal, buccal, subcutaneous and intramuscular administration. Local action is usually the main objective of intra-articular, spinal, epidural, and topical (conjunctival, cutaneous, mucosal) administration, although some degree of systemic absorption may occur. Systemic absorption can occur from specific transcutaneous formulations (e.g. nicotine and fentanyl patches). Rectal absorption can be particularly complex to predict, depending on how much of the dose enters the portal circulation and the impact of first pass metabolism.

Half life

Half life is the time taken for a first order process to be 50% complete. This process could be absorption, elimination or accumulation. The natural logarithm of two (ln(2)) is used to calculate half life, and may be approximated as 0.7. In a first order process, an exponential function predicts the time course. After one half-life the process is 50% complete, after 2 half-lives 75%, after 3 half-lives 87.5%, after 4 half-lives 93.75%, after 5 half-lives 96.875%, and so on. A first order process can be considered essentially complete after 4 half-lives.
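
In general, the fraction of a first order process completed after n half-lives is 1 - (1/2)^n. For example, after 4 half-lives: 1 - (1/2)^4 = 1 - 0.0625 = 0.9375, i.e. 93.75% complete.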

Absorption half life is determined by ln2/Ka, where Ka is the proportionality (rate) constant relating the amount of drug remaining to be absorbed to the rate of absorption. Absorption half lives are typically short, e.g. 30 minutes. Time to peak concentration (Tmax) is approximately 3 x the absorption half life (and for drugs with direct effects, peak effect occurs at the peak concentration).
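
For example, with an absorption half life of 30 minutes (0.5 h), Ka = ln2/0.5 = approximately 1.4 per hour, and the peak concentration would be expected at roughly 3 x 0.5 = 1.5 hours. This is an approximation only; the exact Tmax also depends on the elimination half life.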

Elimination and accumulation half-lives have the same numerical value, but describe different processes. In a simple one compartment model, half life is determined by clearance and volume of distribution: T1/2 = ln2*Vd/CL. In this way half life can be thought of as a secondary parameter determined by the ratio of volume of distribution (Vd) to clearance (CL). Thus half life may not change despite changes in Vd and CL if they change proportionally. It is often preferable to estimate clearance and volume as primary parameters, and then derive half life as a secondary parameter.
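
As an illustration using the parameter values that appear in the workshop models below (CL = 3 and V = 10, treated here as L/h and L): T1/2 = ln2*Vd/CL = 0.693*10/3 = approximately 2.3 hours. Doubling both Vd and CL would leave this half life unchanged.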

Accumulation during a constant infusion or intermittent oral dosing is time dependent and determined by half life. After one half life, a drug will have accumulated to 50% of steady state, and steady state will be reached after approximately 4 half-lives. The rate of accumulation is the same whether a drug is given by repeated bolus doses or by constant infusion, as long as the same total amount of drug is administered over a given time period; bolus dosing simply produces peaks and troughs around the same average time course.

Accumulation for repeated doses is determined by the dosing interval (Dosint) and half life. This can be described by an equation for the accumulation factor: AF = 1/(1 - EXP(-(CL/Vd)*Dosint))

If the dosing interval is equal to the half life then the accumulation factor is 2, which means that the concentration at steady state is twice the concentration after the first dose. The accumulation factor is therefore the ratio of the concentration at steady state to the concentration after the first dose. This only applies when concentrations are compared at the same time after a dose, e.g. 10 minutes after the first dose and 10 minutes after a dose given at steady state (about 4 half lives later).
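
Worked examples using the accumulation factor equation above: if Dosint equals one half life, then (CL/Vd)*Dosint = ln2 and AF = 1/(1 - EXP(-0.693)) = 1/(1 - 0.5) = 2; if Dosint equals two half lives, AF = 1/(1 - 0.25) = approximately 1.33, i.e. much less accumulation.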

Elimination half life refers to the time for the plasma concentration to fall by 50% after the input stops. Elimination is a first order process for most drugs, but may be mixed order (e.g. alcohol). Context sensitive half time (CSHT) is a concept that relates the duration of an infusion to the elimination processes once the infusion stops. It is influenced by the physicochemical properties of the drug and by distribution as well as clearance. It is often discussed with respect to prolonged infusions of lipid soluble drugs such as fentanyl in anaesthesia and intensive care practice, but will not be considered further here.

Rate of input and output and differential equations

Models may be defined in different mathematical forms. Often the simplest analytical solution involves the use of closed form equations. Explicit equations express the dependent variable as a function of the independent variable, constants and parameters of the model (1, p. 35). Implicit equations include the dependent variable within the expression itself (e.g. together with the initial drug concentration), for example Michaelis-Menten type processes (1, p. 38).

Differential equations are often required for iteration of more complex models. Differential equations describe the rate of change of a variable with time. This is commonly expressed as d/dt(X), the differential of X with respect to time, where X is a dependent variable. Either amount or concentration may be used, as long as this is consistent across the equation or series of equations; if amount is used, concentration needs to be specified as amount/volume. The initial (starting) amount or concentration must be specified for each compartment considered.

The rate of change of a variable is equal to the inputs minus the outputs; at steady state the rate of input is equal to the rate of output. The rate of input is described by the administration method and absorption kinetics: bolus, zero order, first order, or a combination. The rate of output is described by the elimination model: first order or mixed order. Mixed order processes behave like first order processes at low concentrations but approach a maximum rate (Vmax) at high concentrations.
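
To make this concrete, the following Berkeley Madonna sketch (illustrative only, not part of the assignment; the Vmax and Km values are hypothetical) writes the rate of change of concentration as input minus output, combining a first order input with a mixed order (Michaelis-Menten) output:

METHOD RK4
STARTTIME = 0
STOPTIME = 10
DT = 0.02
DTOUT = 1
Dose = 100
V = 10 {volume of distribution}
Tabs = 1 {absorption half life}
Ka = logn(2)/Tabs {first order absorption rate constant}
Vmax = 5 {hypothetical maximum elimination rate (amount/time)}
Km = 2 {hypothetical concentration at half the maximum elimination rate}
init(Gut) = Dose
init(Conc) = 0
d/dt(Gut) = -Gut*Ka {first order output from the gut}
d/dt(Conc) = (Gut*Ka - Vmax*Conc/(Km + Conc))/V {input from the gut minus mixed order output}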

As a form of mathematical description, differential equations can be helpful for conceptualising physiological processes and more complex models, e.g. effect compartments, combinations of different types of input, and multiple compartment models. Inter-compartmental clearance is generally a first order process. The structural model and number of compartments are determined not only by the properties of the drug but, from a modelling perspective, by the particular data. Different structural models (single or multiple compartments) may be required to give the best fit for a particular data set. More complicated models are not necessarily more valid.

Differential equations are transformed by various methods to produce integrated equations. Pharmacometric software solves differential equations numerically, calculating values progressively over time; this can be time consuming for more complex models. Various stepwise methods of solving differential equations exist, for example Runge-Kutta, and modifications of this method (e.g. adjustment of step size) allow improved accuracy (1, p. 53).

The use of nested conditional expressions (if then else endif blocks) within the model may allow the differential equations to be solved more quickly. Examples in this assignment include expressions defining lag time and duration of input.
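
For example, a conditional expression can switch a zero order input on only between Tlag and Tlag + Tk0. The Berkeley Madonna sketch below is illustrative only (it is not the assignment's k01L model code, and the Tlag and Tk0 values are hypothetical):

METHOD RK4
STARTTIME = 0
STOPTIME = 10
DT = 0.02
DTOUT = 1
Dose = 100
CL = 3 {clearance}
V = 10 {volume of distribution}
Tlag = 0.5 {hypothetical lag time before input starts}
Tk0 = 2 {hypothetical duration of the zero order input}
RateIn = IF (TIME > Tlag) AND (TIME <= Tlag + Tk0) THEN Dose/Tk0 ELSE 0 {zero order input switched on by the conditional}
init(Conc) = 0
d/dt(Conc) = (RateIn - CL*Conc)/V {input minus first order elimination}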

Evaluating a model: methods of comparing goodness of fit

How and why do we attempt to choose 'the best model' if all models are wrong? The 'rightness' of a model is a relative not absolute concept. What is the main objective of the model? For example parameter estimation, covariate evaluation, or assessment of between subject variability? The clinical relevance and applicability of the model is important in evaluating its usefulness, its 'fit' for purpose. Pharmacometrics by definition entails measurement and quantification, but to some extent subjective evaluation is also important. Before embarking on model building it is helpful to have clear objectives and a plan for evaluation.

A useful model describes a particular data set well, and best predicts and explains the relationship of the variables in the system. The simplest methods of fitting mathematical models to data involve drawing a line of best fit through observed data points (e.g. 'eyeball method'). Pharmacometric modelling involves adjusting parameter estimates to best fit the model to the data (1, p. 11). During modelling, parameters are determined by nonlinear regression. Parameters may be adjusted, added or subtracted. Constants remain fixed (doses, infusion times). The stability of the model can be assessed by rerunning it with different initial parameter estimates. More complex models are not necessarily better, and may involve more error. Evaluating a model is necessary to avoid overparameterisation.

Most simply, model evaluation involves comparison of observed and predicted data (e.g. concentration values), commonly represented in graphical form. This is termed 'goodness of fit' (GOF). The characteristics of a model can be varied to predict values that are more likely to reflect the true population values. Of course the true population values can never be entirely known, but comparison with the observed sample values provides a reference. Simulation and bootstrapping are used to evaluate the robustness and sensitivity of the model.

The data set being modelled will determine the structure of the model, covariate selection, and parameter estimates. For example, different models for the same drug may vary in the number of compartments, the covariates, and the parameter estimates. This is because different data sets may vary in terms of the characteristics of the population sampled, the time course observed by sampling, and errors involved. So to compare and discriminate between models the same data set must be used.

Evaluation of a model occurs throughout the modelling process to enable selection of the model, and inclusion of covariates. This involves the assessment of the accuracy, precision and variability of the observed and predicted data and estimates. As well as checking the fit to the data, evaluation includes assessment of parameter estimates, the influence of covariates and the estimation of error and variability. In evaluating parameter estimates we consider estimates of their variability and error, comparison with literature values and consideration of empirical models. Biological plausibility is also an important consideration in evaluating covariate selection.

Evaluation may be the most difficult part of the modelling process. There is no one optimal test of the validity of a model and a variety of methods are generally used. In evaluating the fit of a model to the data, we need measures to describe both how well it fits, and how and where it does not fit. These two concepts of 'fit' may require different methods of evaluation: 'lack of fit' as well as goodness of fit. So as well as overall goodness of fit, model evaluation ideally involves specific diagnosis of what the problems are with the model, and where they exist. Different parts of the model need to be evaluated, for example the fixed and random effects in NONMEM.

The output of pharmacometric software programs provides a large array of information describing the fit of the data. Two main methods are used to evaluate a model, and determine how well it fits the data set: using graphical representations and statistical assessment.

Graphs for evaluating models: Scatter plots are commonly used to evaluate goodness of fit and to detect differences between models. GOF plots are probably more important tools of model evaluation than statistical tests such as the objective function; a model with a lower objective function is not necessarily better if there is no difference in goodness of fit or parameter estimates. Graphical assessment of models is useful to diagnose 'lack of fit' and specifically where the problem lies. Outliers can be identified by visual inspection, and there are different approaches for dealing with these deviations from expected values.

Some standard diagnostic plots

PRED, DV vs Time: Graphs of observed and predicted values of the dependent variable (e.g. concentration) versus time are a simple and powerful tool. If the model is a good fit, the predictions should reproduce the amplitude and shape of the observed data.

PRED vs DV: Graphs can be constructed of the predicted versus observed dependent variable (e.g. concentration). If the fit is good, the data should be clustered uniformly and closely around the line of unity. The time course is lost in this graph.

Visual predictive check: Posterior or visual predictive checks (VPC) compare predicted with observed data (e.g. against a 95% prediction interval). They are more complex to perform, involving simulation from the final parameter estimates followed by comparison of the distribution of the observations over time with the simulated distribution (95% CI). VPCs are a relatively new method of model evaluation but are arguably one of the most robust forms of diagnostic assessment (2).

Statistical methods of model evaluation: A number of statistical measures are generated by each nonlinear regression program. Statistical tests commonly used to evaluate and discriminate between models include objective function values, standard errors and coefficients of variation. In general terms, the model with the smallest value of the test statistic (minimum value) represents 'the best model' for the data, i.e. the best fit. Statistical significance levels are set in advance; for example, for nested models differing by one parameter, a reduction in the objective function of at least 3.84 points is commonly required (approximately equivalent to p < 0.05, chi-squared with 1 degree of freedom).

Wings for NONMEM output provides a coefficient of variation (CV%) for each parameter, estimating its variability. A high CV (> 10-20%) or wide confidence intervals may result from under- or overparameterisation, or from problems with the selection, error and number of data points (1, p. 96). If the estimates of standard error are very small then they are of limited value, because this method is sensitive to small changes in the means.

Methods of model evaluation

Correlation coefficients provide limited information about goodness of fit. Coefficients range from zero to one, with one indicating a perfect correlation or fit. Good fits usually have high R values, but the inverse is not always true (3, p. 322). Correlation is less useful for nonlinear methods, where values > 0.9 may be calculated despite a visually poor fit (1, p. 108). Another disadvantage of correlation is that it does not provide any information about the absolute values.

Plots of residuals and weighted residuals can be helpful for choosing error models and for evaluating models, but are probably of limited value when presented in isolation without the other GOF plots discussed above.

A residual is the difference between an observed and calculated (predicted) value of the dependent variable (3, p. 345).

Residual = Yobs - Ycalc
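
Conceptually (this is a simplification; NONMEM computes weighted residuals from the full model covariance), a weighted residual scales each residual by its expected standard deviation, WRES = (Yobs - Ypred)/SD, so that well behaved weighted residuals should mostly lie within about +/- 3.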

RES vs PRED: This plot should include a zero line and a trend line. The shape of the distribution of residuals around the zero line suggests the error model: a uniform band suggests an additive error model, while fanning (spread increasing with the predicted value) suggests a proportional error model. The shape may also suggest whether a weighting scheme is needed to deal with overestimation of large concentrations and underestimation of small concentrations.
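
For reference, the common residual error models can be written in a generic form (a sketch of the general idea rather than any one program's exact parameterisation), with f the model prediction and eps a random error term:
Additive: Y = f + eps (constant error SD)
Proportional: Y = f*(1 + eps) (error SD proportional to the prediction)
Combined: both an additive and a proportional component; the 'combined1' error model selected later in the Monolix workshop is of this type.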

WRES vs PRED: The weighted residuals should be distributed uniformly around the WRES = 0 line, and mostly within about 3 standard deviations of zero.

RES vs Time: May reveal regions of the time course that are not well explained by the model, for example periods where sampling is sparse or data are missing.

WRES vs TIME: The plot of weighted residual vs. time should be randomly dispersed around the residual zero line.

WRES vs ID: This plot can be useful to assess outliers and their impact on the model.

Subjective evaluation may be as important as objective assessment of a model. Hence the 'rightness' of a model depends on the perspective of the modeller, the population, and the purpose of the model. Other important considerations are the biological plausibility and the simplicity of the model (parsimony principle).

Study designs that optimize the number and timing of samples may improve model fit and the accuracy of parameter estimation.

Where the results of goodness of fit plots and statistical tests differ, more emphasis should be placed on the graph overlaying predicted and observed concentrations versus time.

Subjective assessment remains important, especially when faced with conflicting information about goodness of fit. However, visual perceptual errors and bias can lead to errors in the interpretation of goodness of fit plots.

Modelling involves the creation of a model that fits the particular data set.

Bonate "A model defines how you think your data were generated" (4, p. 1).

Introduction by Anita Sumpter (2008).

References

  1. Bourne D. Mathematical Modeling of Pharmacokinetic Data. CRC Press, Boca Raton, 1995.
  2. Holford N. The Visual Predictive Check - Superiority to Standard Diagnostic (Rorschach) Plots. PAGE 14 (2005) Abstr 738. URL: www.page-meeting.org/?abstract=738
  3. Schoenwald RD. Pharmacokinetics in Drug Discovery and Development. CRC Press, Boca Raton 2002. 
  4. Bonate PL. Pharmacokinetic-pharmacodynamic modelling and simulation. Springer, New York, 2005. 

Workshop hints

Note: All files should be loaded from and saved to your Pharmacometrics Data\Absorption and Disposition folder for this assignment.

Excel

Find the file Pharmacometrics Data\Absorption and Disposition\kak0.xls

  1. Open kak0.xls with Excel.
  2. Look at each of the 3 worksheets (ka1, ka1L, k01L).
  3. Identify:
    1. The model code
    2. The model parameters
    3. The independent variable

Berkeley Madonna

Closed Form Equation for Absorption Model

  1. Open Berkeley Madonna shortcut in folder 'Pharmacometrics Programs'
  2. Add code to the Equations window defining the STARTTIME and STOPTIME, the output interval (DTOUT), the model parameters and the model equation (Figure 1).
  3. Click on Run to run the model.
  4. Save your model in an 'Absorption and Disposition\Madonna' folder with the name ka1.mmd
  5. View a table of times and concentrations by clicking on the Table icon in the Run 1 graph window.
METHOD RK4 {numerical integration method}
STARTTIME = 0 {start of simulation}
STOPTIME=10 {end of simulation}
DT = 0.02 {integration step size}
DTOUT=1 {output interval for tables and plots}
Dose=100 {dose administered}
CL=3 {clearance}
V=10 {volume of distribution}
Tabs=1 {absorption half life}
Ka=logn(2)/Tabs {first order absorption rate constant}
Conc=Dose*Ka/V/(Ka-CL/V)*(EXP(-CL/V*Time)-EXP(-Ka*Time)) {closed form solution: one compartment, first order absorption}
 Figure 1. Code for ka1.mmd

Differential Equation for Absorption Model

  1. Save the ka1.mmd model with the new name ka1de.mmd.
  2. Change the model code as shown in Figure 2.
  3. Click on Run to run the model.
  4. View a table of times and concentrations by clicking on the Table icon in the Run 1 graph window.
METHOD RK4 {numerical integration method}
STARTTIME = 0 {start of simulation}
STOPTIME=10 {end of simulation}
DT = 0.02 {integration step size}
DTOUT=1 {output interval for tables and plots}
Dose=100 {dose administered}
CL=3 {clearance}
V=10 {volume of distribution}
Tabs=1 {absorption half life}
Ka=logn(2)/Tabs {first order absorption rate constant}
init(Gut)=Dose {initial amount in the gut}
init(Conc)=0 {initial concentration in the central compartment}
d/dt(Gut)= - Gut*Ka {first order loss from the gut}
d/dt(Conc)=(Gut*Ka - CL*Conc)/V {input from the gut minus first order elimination}

 Figure 2. Code for ka1de.mmd

Monolix

  1.  Copy  the TIME and CONC data from the ka1 simulation in kak0.xls to .csv format (ka1.csv) and save in the folder 'Absorption and Disposition'. 

    The column headers for all data files will be:

    #ID TIME AMT DV
    You will need to insert a row at time zero and put the dose into the AMT column in this row and make DV equal to '.'


    #ID  TIME  AMT  CONC
    1    0     100  .
    1    0     .    -1.9
    1    0.25  .    0.2
    1    0.5   .    3.4
    1    0.75  .    2.3
    1    1     .    5.3
    1    1.5   .    4.9
    1    2     .    3.4
    1    3     .    4.4
    1    4     .    3.2
    1    6     .    4.2
    1    8     .    3.3

    Table 1. ka1.csv

    When you are ready to work with the ka1L and k01L data, save the simulated data as csv files in the 'Absorption and Disposition' folder.

  2. Create a new folder 'Monolix'  in your 'Absorption and Disposition' folder.
  3. Click on the 'New Project' icon.
  4. In MONOLIX, click 'Data' and use 'Browse' to locate your ka1.csv file and click open. 
  5. Click 'The structural model' to view the model library. Go to 'Load from library' and navigate to and select the oral1_1cpt_kaVCl model. Click accept.

    For each of the three simulated data sets (ka1, ka1L and k01L), you should perform parameter estimates with three models:

    KA1 model: oral1_1cpt_kaVCl
    KA1L model: oral1_1cpt_TlagkaVCl
    K01L model: oral0_1cpt_TlagTk0VCl

  6. Under the 'Initial Estimates' tab, set the 'Residual error parameters' to the value of reserr in the Excel file kak0.xls file.
  7. Change the initial parameter estimates by editing their values in the 'Check Initial Estimates' tab. A plot of predictions based on the model and initial parameter estimates is displayed along with observed values. Once these are reasonable, click 'Set as initial values'.
  8. Click on 'Statistical Model and Tests'.
  9. Set the 'Stand. dev. of the random effects' to 0 for each parameter by selecting 'None' under 'Random Effects'.  Because these are data from a single individual, there is no between subject variability (random effects). The SD of random effects is therefore 0.
  10. Under Observation model, set the error model to combined1 and the distribution to normal.
  11. IMPORTANT: Save the project as KA1_ka1_project.mlxtran in the 'Absorption and Disposition\Monolix\KA1' folder. You will need to make a new folder for each of the 3 data sets (i.e. KA1, KA1L, K01L).
  12. Set the task options at the top  of the 'Statistical model & Tasks' tab by ticking 'Standard Err', 'Likelihood' and 'Plots'.
  13. Click on the grey bar next to the blue part of 'Plots' and click to enable 'Observed data', 'Individual fits' and 'SAEM'. Disable all other options. Close the 'Plots' options window.
  14. Estimate the parameters by clicking on 'Run'. This will take a while depending on the complexity of the model. During the estimation process you can see how the parameter estimates are being changed and settle down towards the final value.
  15. When the estimation finishes close the Monolix Scenario window. 
  16. Click on 'Plots' and look at the individual fit. Use the 'Display' options to show 'Predicted Median' and 'Prediction Interval' for the 'Individual predictive check'. You can alter the names of plot axes using the 'Axes' tab. When you are ready, you can export all of your plots together into your project folder automatically using the Export option in the main menu.
  17. View the parameter estimates by clicking 'Results'. A text file containing these results is saved as 'pop_parameters.txt' in the project folder.

    Make up each project name as the combination of the model (e.g. KA1) and the data source (e.g. ka1L) for the remaining models. After you have used all three models with the ka1 data, create a folder for the ka1L data and then for the k01L data.

    At the end of the assignment, you should have 9 Monolix projects in the Monolix folder.

    Take care to save each Monolix workspace with a name that will clearly identify the model and data type to you. These instructions suggest doing so by using separate folders for each simulated data set, containing three models each.

Learning

  1. Discuss methods of comparing models for goodness of fit.
  2. Find out what the log-likelihood, AIC and BIC are (displayed in pop_parameters.txt in the results folder for each Monolix model).
  3. Compare the results of fitting the 3 sets of data (ka1, ka1L, k01L) using each of the 3 models (a total of 9 sets of parameter estimation).