EasyNODE and EasyUDE
EasyNODE and EasyUDE provide a quick, simple alternative to the other model constructors in UniversalDiffEq.jl. Each returns a pretrained model whose neural network has a single hidden layer. The models are trained using the gradient_descent! function.
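For example, a minimal sketch of the workflow (the toy data set below is illustrative; the constructors expect a data frame with a time column and one column per state variable):

using DataFrames, UniversalDiffEq

# toy data set: a time column plus one column per state variable
data = DataFrame(time = 0:0.1:5.0, x = sin.(0:0.1:5.0))

# returns a model that has already been trained with gradient_descent!
easy_model = EasyNODE(data)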
EasyNODE constructors
UniversalDiffEq.EasyNODE — Method

EasyNODE(data, X; kwargs...)

When a data frame X is supplied, the model will run with covariates. The argument X should have a column for time t, with the values of the covariates in the remaining columns. The values in X are interpolated with a linear spline for times not included in the data frame.
kwargs
- `time_column_name`: Name of the column in `data` and `X` that corresponds to time. Default is `"time"`.
- `variable_column_name`: Name of the column in `X` that corresponds to the variables. Default is `nothing`.
- `value_column_name`: Name of the column in `X` that corresponds to the covariates. Default is `nothing`.
- `hidden_units`: Number of neurons in the hidden layer. Default is `10`.
- `seed`: Fixed random seed for repeatable results. Default is `1`.
- `proc_weight`: Weight of the process error $\omega_{proc}$. Default is `1.0`.
- `obs_weight`: Weight of the observation error $\omega_{obs}$. Default is `1.0`.
- `reg_weight`: Weight of the regularization error $\omega_{reg}$. Default is `10^-6`.
- `reg_type`: Type of regularization, either `"L1"` or `"L2"`. Default is `"L2"`.
- `l`: Extrapolation parameter for forecasting. Default is `0.25`.
- `extrap_rho`: Extrapolation parameter for forecasting. Default is `0.0`.
- `step_size`: Step size for the ADAM optimizer. Default is `0.05`.
- `maxiter`: Maximum number of iterations of the gradient descent algorithm. Default is `500`.
- `verbose`: Should the training loss values be printed? Default is `false`.
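A sketch of the covariate interface, with illustrative column names (`temperature` is not part of the API) and toy values:

using DataFrames, UniversalDiffEq

data = DataFrame(time = 0:1.0:10.0, x = rand(11))

# covariates: a time column plus one column per covariate; values are
# interpolated with a linear spline at times not listed in X
X = DataFrame(time = 0:2.0:10.0, temperature = rand(6))

easy_model = EasyNODE(data, X; hidden_units = 20, maxiter = 1000)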
UniversalDiffEq.EasyNODE — Method

EasyNODE(data; kwargs...)

Constructs a pretrained continuous-time model for the data set data using a single-layer neural network to represent the system's dynamics.
kwargs
- `time_column_name`: Name of the column in `data` that corresponds to time. Default is `"time"`.
- `hidden_units`: Number of neurons in the hidden layer. Default is `10`.
- `seed`: Fixed random seed for repeatable results. Default is `1`.
- `proc_weight`: Weight of the process error $\omega_{proc}$. Default is `1.0`.
- `obs_weight`: Weight of the observation error $\omega_{obs}$. Default is `1.0`.
- `reg_weight`: Weight of the regularization error $\omega_{reg}$. Default is `10^-6`.
- `reg_type`: Type of regularization, either `"L1"` or `"L2"`. Default is `"L2"`.
- `l`: Extrapolation parameter for forecasting. Default is `0.25`.
- `extrap_rho`: Extrapolation parameter for forecasting. Default is `0.0`.
- `step_size`: Step size for the ADAM optimizer. Default is `0.05`.
- `maxiter`: Maximum number of iterations of the gradient descent algorithm. Default is `500`.
- `verbose`: Should the training loss values be printed? Default is `false`.
Creating a model with the EasyNODE function is equivalent to creating it with the NODE function and then running gradient_descent!:
easy_model = EasyNODE(data)
# is equivalent to running
model = NODE(data)
gradient_descent!(model)

EasyUDE constructors
UniversalDiffEq.EasyUDE — Method

EasyUDE(data, derivs!, initial_parameters; kwargs...)

Constructs a pretrained UDE model for the data set data based on the user-defined derivatives derivs!. An initial guess of the model parameters is supplied with the initial_parameters argument.
kwargs
- `time_column_name`: Name of the column in `data` that corresponds to time. Default is `"time"`.
- `hidden_units`: Number of neurons in the hidden layer. Default is `10`.
- `seed`: Fixed random seed for repeatable results. Default is `1`.
- `proc_weight`: Weight of the process error $\omega_{proc}$. Default is `1.0`.
- `obs_weight`: Weight of the observation error $\omega_{obs}$. Default is `1.0`.
- `reg_weight`: Weight of the regularization error $\omega_{reg}$. Default is `10^-6`.
- `reg_type`: Type of regularization, either `"L1"` or `"L2"`. Default is `"L2"`.
- `l`: Extrapolation parameter for forecasting. Default is `0.25`.
- `extrap_rho`: Extrapolation parameter for forecasting. Default is `0.0`.
- `step_size`: Step size for the ADAM optimizer. Default is `0.05`.
- `maxiter`: Maximum number of iterations of the gradient descent algorithm. Default is `500`.
- `verbose`: Should the training loss values be printed? Default is `false`.
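These keyword arguments tune the training procedure. For instance (the values here are illustrative only; known_dynamics! and initial_parameters are defined in the example further below):

easy_model = EasyUDE(data, known_dynamics!, initial_parameters;
                     hidden_units = 20, reg_weight = 1e-4,
                     step_size = 0.01, maxiter = 1000, verbose = true)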
UniversalDiffEq.EasyUDE — Method

EasyUDE(data::DataFrame, X, derivs!::Function, initial_parameters; kwargs...)

When a data frame X is supplied, the model will run with covariates. The argument X should have a column for time t, with the values of the covariates in the remaining columns. The values in X are interpolated with a linear spline for times not included in the data frame. When X is provided, the derivs! function must have the form derivs!(du,u,x,p,t), where x is a vector with the values of the covariates at time t.
kwargs

- `time_column_name`: Name of the column in `data` and `X` that corresponds to time. Default is `"time"`.
- `variable_column_name`: Name of the column in `X` that corresponds to the variables. Default is `"variable"`.
- `value_column_name`: Name of the column in `X` that corresponds to the covariates. Default is `"value"`.
- `hidden_units`: Number of neurons in the hidden layer. Default is `10`.
- `seed`: Fixed random seed for repeatable results. Default is `1`.
- `proc_weight`: Weight of the process error $\omega_{proc}$. Default is `1.0`.
- `obs_weight`: Weight of the observation error $\omega_{obs}$. Default is `1.0`.
- `reg_weight`: Weight of the regularization error $\omega_{reg}$. Default is `10^-6`.
- `reg_type`: Type of regularization, either `"L1"` or `"L2"`. Default is `"L2"`.
- `l`: Extrapolation parameter for forecasting. Default is `0.25`.
- `extrap_rho`: Extrapolation parameter for forecasting. Default is `0.0`.
- `step_size`: Step size for the ADAM optimizer. Default is `0.05`.
- `maxiter`: Maximum number of iterations of the gradient descent algorithm. Default is `500`.
- `verbose`: Should the training loss values be printed? Default is `false`.
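When covariates are supplied, the known-dynamics function gains the extra argument x. A sketch, in which the function name and dynamics are illustrative assumptions:

# known dynamics driven by the first covariate at time t
function covariate_dynamics!(du, u, x, p, t)
    du .= p.a .* u .+ p.b .* x[1]
    return du
end

initial_parameters = (a = 1.0, b = 0.1)
easy_model = EasyUDE(data, X, covariate_dynamics!, initial_parameters)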
Unlike EasyNODE, running EasyUDE is not simply equivalent to running CustomDerivatives followed by gradient_descent!: EasyUDE creates a UDE model with a continuous process model of the form
\[\frac{dx}{dt} = NN(x;w,b) + f(x;a),\]
where $f$ corresponds to the `known_dynamics!` argument and $a$ to the `initial_parameters` argument of EasyUDE.
function known_dynamics!(du,u,parameters,t)
    du .= parameters.a.*u .+ parameters.b # linear dynamics as an example
    return du
end

initial_parameters = (a = 1.0, b = 0.1)

easy_model = EasyUDE(data,known_dynamics!,initial_parameters)
# is equivalent to running
using Lux, Random

# build the same single-hidden-layer network EasyUDE uses internally
dims_in = 1
hidden_units = 10
nonlinearity = tanh
dims_out = 1
NN = Lux.Chain(Lux.Dense(dims_in,hidden_units,nonlinearity),
               Lux.Dense(hidden_units,dims_out))

# initialize the network parameters and states
rng = Random.default_rng()
NNparameters, states = Lux.setup(rng,NN)
function derivs!(du,u,p,t)
    C, _ = NN(u, p.NN, states) # evaluate the neural network
    du .= C .+ p.a .* u .+ p.b # neural network plus the known dynamics
    return du
end

# the network parameters must be included alongside a and b
initial_parameters = (NN = NNparameters, a = 1.0, b = 0.1)

model = CustomDerivatives(data,derivs!,initial_parameters)
gradient_descent!(model)