Step with scipy integrator without going past final point
The scipy integrators take two parameters when integrating: (1) step, which tells the integrator to take a single step, and (2) relax, which tells the integrator that it is fine to step past the final time point.

I want to take single steps (so I can collect the entire solution), but not step past the final time (because I have a discontinuity there in my real problem). However, step being True seems to make the integrator ignore relax being False: it takes one step past the final time and then quits.
Here is a simple example illustrating the problem:
```python
from scipy.integrate import ode

obj = ode(lambda t, y: -y)  # simple exponential decay
obj.set_initial_value(4.0)
final_time = 2.0
ts_new = []
ys_new = []

# take one step at a time until the final time is reached
while obj.t < final_time:
    y_new = obj.integrate(final_time, step=True, relax=False)
    ts_new.append(obj.t)
    ys_new.append(y_new)

print(ts_new[-1])  # 2.073628416585726
```
I would have expected the last step to be a partial step (is that what it's called?) that stops on final_time, such that the last value matches the one given by obj.integrate(final_time, step=False, relax=False). Other than rerunning the entire ODE, is there a way to get that last point? It would be fine to set step to False before the last step, but I see no way to know whether a step will go past the final time until it already has.
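As an aside, the newer solve_ivp interface (scipy >= 1.0, so possibly newer than what the old-style ode code above targets; this is an assumption on my part, not part of the original setup) behaves the way I want by default: it returns the solver's internal step points, and it clamps the last step so the solution ends exactly on the final time. A minimal sketch with the same decay problem:

```python
from scipy.integrate import solve_ivp

# same exponential decay problem as above; solve_ivp records its
# internal step points and never steps past the end of the span
sol = solve_ivp(lambda t, y: -y, (0.0, 2.0), [4.0])

print(sol.t[-1])     # last time point is exactly 2.0
print(sol.y[0, -1])  # solution value at the final time
```

The internal steps land in sol.t, and the last entry is the final time itself rather than an overshoot.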
A bad solution is to set max_step to final_time - obj.t at each step. It is tempting because it works, but the performance is horrendous:
```python
from scipy.integrate import ode

obj = ode(lambda t, y: -y)  # simple exponential decay
obj.set_initial_value(4.0)
final_time = 2.0
ts_new = []
ys_new = []

# take one step at a time until the final time is reached,
# capping each step so it cannot overshoot
while obj.t < final_time:
    obj.set_integrator('vode', max_step=final_time - obj.t)
    y_new = obj.integrate(final_time, step=True, relax=False)
    ts_new.append(obj.t)
    ys_new.append(y_new)

print(ts_new[-1])  # 2.0
```
Note that it stops at the correct time. However, it appears that the call to set_integrator rebuilds the integrator, which causes a colossal slowdown: it takes 27 steps to integrate the system without this workaround, but 36215 steps with it. Nor is it possible to avoid rebuilding the integrator by directly mutating obj._integrator.max_step, since that attribute appears to be a copy of the Fortran parameter and changing it has no effect on the integration.
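For completeness, the step-by-step loop the question is after seems to be what the newer solver classes (scipy >= 0.19, e.g. RK45; again an assumption on my part, not the vode integrator used above) provide directly: their step() method advances one internal step at a time but clamps the final step at t_bound, so no max_step trick or integrator rebuild is needed. A sketch:

```python
from scipy.integrate import RK45

# same decay problem; RK45.step() takes one internal step at a
# time but never steps past t_bound
solver = RK45(lambda t, y: -y, t0=0.0, y0=[4.0], t_bound=2.0)
ts, ys = [solver.t], [solver.y[0]]
while solver.status == 'running':
    solver.step()
    ts.append(solver.t)
    ys.append(solver.y[0])

print(ts[-1])  # 2.0 -- the last step is clamped to t_bound
```

The loop collects every internal step, and the final entry lands exactly on t_bound, which is the behavior step=True, relax=False fails to deliver with the old interface.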