Easy NODE and UDE

EasyNODE and EasyUDE provide a quick, simple alternative to the other model constructors in UniversalDiffEq.jl. Each returns a pretrained model whose neural network has a single hidden layer. The models are trained using the gradient_descent! function.

EasyNODE constructors

UniversalDiffEq.EasyNODE - Method
EasyNODE(data,X;kwargs ... )

Constructs a pretrained continuous-time model for the data set data with covariates X, using a neural network with a single hidden layer to represent the system's dynamics. X should have a column for time t, with the covariate values in the remaining columns. Covariate values for times not included in the data frame are interpolated with a linear spline.

kwargs

  • time_column_name: Name of column in data and X that corresponds to time. Default is "time".
  • variable_column_name: Name of column in X that corresponds to the variables. Default is nothing.
  • value_column_name: Name of column in X that corresponds to the covariates. Default is nothing.
  • hidden_units: Number of neurons in hidden layer. Default is 10.
  • seed: Fixed random seed for repeatable results. Default is 1.
  • proc_weight: Weight of process error $\omega_{proc}$. Default is 1.0.
  • obs_weight: Weight of observation error $\omega_{obs}$. Default is 1.0.
  • reg_weight: Weight of regularization error $\omega_{reg}$. Default is $10^{-6}$.
  • reg_type: Type of regularization, whether "L1" or "L2" regularization. Default is "L2".
  • l: Extrapolation parameter for forecasting. Default is 0.25.
  • extrap_rho: Extrapolation parameter for forecasting. Default is 0.0.
  • step_size: Step size for ADAM optimizer. Default is 0.05.
  • maxiter: Maximum number of iterations in gradient descent algorithm. Default is 500.
  • verbose: Should the training loss values be printed? Default is false.
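For illustration, here is a minimal sketch of this method on synthetic data. The state column x1, the covariate column temperature, and the data values are hypothetical; the sketch assumes the wide layout for X implied by the kwargs above (a time column plus one column per covariate).

using DataFrames, UniversalDiffEq

# Hypothetical one-state time series (51 observations).
data = DataFrame(time = 0:1.0:50, x1 = rand(51))

# Hypothetical covariate frame in wide format: time plus one column per covariate.
X = DataFrame(time = 0:1.0:50, temperature = randn(51))

# Build and pretrain a NODE that includes the covariates.
easy_model = EasyNODE(data, X; hidden_units = 10, maxiter = 500)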
UniversalDiffEq.EasyNODE - Method
EasyNODE(data;kwargs ... )

Constructs a pretrained continuous-time model for the data set data using a neural network with a single hidden layer to represent the system's dynamics.

kwargs

  • time_column_name: Name of column in data that corresponds to time. Default is "time".
  • hidden_units: Number of neurons in hidden layer. Default is 10.
  • seed: Fixed random seed for repeatable results. Default is 1.
  • proc_weight: Weight of process error $\omega_{proc}$. Default is 1.0.
  • obs_weight: Weight of observation error $\omega_{obs}$. Default is 1.0.
  • reg_weight: Weight of regularization error $\omega_{reg}$. Default is $10^{-6}$.
  • reg_type: Type of regularization, whether "L1" or "L2" regularization. Default is "L2".
  • l: Extrapolation parameter for forecasting. Default is 0.25.
  • extrap_rho: Extrapolation parameter for forecasting. Default is 0.0.
  • step_size: Step size for ADAM optimizer. Default is 0.05.
  • maxiter: Maximum number of iterations in gradient descent algorithm. Default is 500.
  • verbose: Should the training loss values be printed? Default is false.

Creating a model with the EasyNODE function is equivalent to creating it with the NODE function and then running gradient_descent!:

easy_model = EasyNODE(data)

# is equivalent to running
model = NODE(data)
gradient_descent!(model)
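
The step_size, maxiter, and verbose keywords listed above configure this underlying training step, so a pretrained model can plausibly be refined by calling gradient_descent! again with new settings. That these keywords are accepted by gradient_descent! directly is an assumption based on that correspondence.

# Refine the pretrained model with a smaller step size. Assumes
# gradient_descent! accepts the optimizer kwargs listed above.
gradient_descent!(easy_model, step_size = 0.01, maxiter = 250, verbose = true)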

EasyUDE constructors

UniversalDiffEq.EasyUDE - Method
EasyUDE(data,derivs!,initial_parameters;kwargs ... )

Constructs a pretrained UDE model for the data set data based on the user-defined derivatives derivs!. An initial guess of the model parameters is supplied with the initial_parameters argument.

kwargs

  • time_column_name: Name of column in data that corresponds to time. Default is "time".
  • hidden_units: Number of neurons in hidden layer. Default is 10.
  • seed: Fixed random seed for repeatable results. Default is 1.
  • proc_weight: Weight of process error $\omega_{proc}$. Default is 1.0.
  • obs_weight: Weight of observation error $\omega_{obs}$. Default is 1.0.
  • reg_weight: Weight of regularization error $\omega_{reg}$. Default is $10^{-6}$.
  • reg_type: Type of regularization, whether "L1" or "L2" regularization. Default is "L2".
  • l: Extrapolation parameter for forecasting. Default is 0.25.
  • extrap_rho: Extrapolation parameter for forecasting. Default is 0.0.
  • step_size: Step size for ADAM optimizer. Default is 0.05.
  • maxiter: Maximum number of iterations in gradient descent algorithm. Default is 500.
  • verbose: Should the training loss values be printed? Default is false.
UniversalDiffEq.EasyUDE - Method
EasyUDE(data::DataFrame,X,derivs!::Function,initial_parameters;kwargs ... )

When a data frame X is supplied, the model is fit with covariates. X should have a column for time t, with the covariate values in the remaining columns; covariate values for times not included in the data frame are interpolated with a linear spline. When X is provided, the derivs! function must have the form derivs!(du,u,x,p,t), where x is a vector containing the value of the covariates at time t.

kwargs

  • time_column_name: Name of column in data and X that corresponds to time. Default is "time".
  • variable_column_name: Name of column in X that corresponds to the variables. Default is "variable".
  • value_column_name: Name of column in X that corresponds to the covariates. Default is "value".
  • hidden_units: Number of neurons in hidden layer. Default is 10.
  • seed: Fixed random seed for repeatable results. Default is 1.
  • proc_weight: Weight of process error $\omega_{proc}$. Default is 1.0.
  • obs_weight: Weight of observation error $\omega_{obs}$. Default is 1.0.
  • reg_weight: Weight of regularization error $\omega_{reg}$. Default is $10^{-6}$.
  • reg_type: Type of regularization, whether "L1" or "L2" regularization. Default is "L2".
  • l: Extrapolation parameter for forecasting. Default is 0.25.
  • extrap_rho: Extrapolation parameter for forecasting. Default is 0.0.
  • step_size: Step size for ADAM optimizer. Default is 0.05.
  • maxiter: Maximum number of iterations in gradient descent algorithm. Default is 500.
  • verbose: Should the training loss values be printed? Default is false.
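As a sketch of the covariate form, the example below passes a long-format X matching the default variable and value column names. The covariate names temp and light, the single state x1, and the linear known_dynamics! function (supplied as the derivs! argument) are hypothetical.

using DataFrames, UniversalDiffEq

# Hypothetical one-state time series.
data = DataFrame(time = 0:1.0:50, x1 = rand(51))

# Covariates in long format: time, variable name, and value columns.
X = DataFrame(time = repeat(collect(0:1.0:50), 2),
              variable = repeat(["temp", "light"], inner = 51),
              value = randn(102))

# With covariates, the known-dynamics function takes the extra argument x,
# the interpolated covariate vector at time t.
function known_dynamics!(du, u, x, p, t)
    du .= p.a .* u .+ p.b .* x[1]
    return du
end

easy_model = EasyUDE(data, X, known_dynamics!, (a = -0.5, b = 0.1))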

Unlike EasyNODE, running EasyUDE is not equivalent to running CustomDerivatives followed by gradient_descent!. EasyUDE builds a UDE model with a continuous-time process model of the form

\[\frac{dx}{dt} = NN(x;w,b) + f(x;a),\]

where $f$ corresponds to the user-defined known-dynamics function (the derivs! argument, called known_dynamics! in the example below) and $a$ to the initial_parameters argument of EasyUDE.

function known_dynamics!(du,u,parameters,t)
    du .= parameters.a .* u .+ parameters.b # any known functional form
    return du
end
initial_parameters = (a = 1.0, b = 0.1)
easy_model = EasyUDE(data,known_dynamics!,initial_parameters)

# is equivalent to running
using Lux, Random
dims_in = 1
hidden_units = 10
nonlinearity = tanh
dims_out = 1
NN = Lux.Chain(Lux.Dense(dims_in,hidden_units,nonlinearity),
                Lux.Dense(hidden_units,dims_out))

rng = Random.default_rng()
NNparameters, NNstates = Lux.setup(rng,NN)

function derivs!(du,u,p,t)
    # neural network term; NNstates is captured from the enclosing scope
    C, _ = NN(u,p.NN,NNstates)
    du .= C .+ p.a .* u .+ p.b
    return du
end

# the network weights are stored alongside the known-dynamics parameters
initial_parameters = (NN = NNparameters, a = 1.0, b = 0.1)
model = CustomDerivatives(data,derivs!,initial_parameters)
gradient_descent!(model) # train, since EasyUDE returns a pretrained model