
[CF-metadata] point observation data in CF 1.4

From: Ben Hetland <ben.a.hetland>
Date: Thu, 11 Nov 2010 16:14:54 +0100

On 11.11.2010 14:02, Bruce Wright posted on behalf of Dave Thomson:
> In response to Chris Barker (2/Nov): We certainly vary the time
> step between particles in our models but usually as a sub-step.
> I agree there is no strong need to be able to store the substeps.

I also think this is an acceptable "limitation" with our models. We
sometimes have multiple time steps when multiple (sub-)models run in
parallel (and interact with each other), but this isn't necessarily
"sub-stepping". In any of these cases, the _output_ time step (which is
what we would store in a netCDF file) is not necessarily the same as the
models' time steps. (E.g., we let the model calculate at 5-minute
intervals, but we might choose to record the results only once every
hour.)

It would be great if a CF data set supported multiple independent
time scales in the same file, because then we could archive the results
of multiple models together when they have actually been run together.
(They are dependent on each other.) Common dimensions, variables and
meta information could then be shared easily and kept consistent. However,
it is not critical if we have to resort to one netCDF file per time
scale.
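
Just to illustrate the idea, here is a rough sketch (using the netCDF4
Python library) of what such a file could look like. The dimension and
variable names are purely hypothetical and this is not an agreed CF
construct:

from netCDF4 import Dataset

ds = Dataset("coupled_run.nc", "w", format="NETCDF4")

# One independent time dimension + coordinate variable per coupled sub-model.
ds.createDimension("time_hydro", None)
ds.createDimension("time_drift", None)

t_hydro = ds.createVariable("time_hydro", "f8", ("time_hydro",))
t_hydro.units = "hours since 2010-11-11 00:00:00"
t_hydro.standard_name = "time"

t_drift = ds.createVariable("time_drift", "f8", ("time_drift",))
t_drift.units = "hours since 2010-11-11 00:00:00"
t_drift.standard_name = "time"

# Shared metadata and common dimensions would be written only once.
ds.title = "Coupled sub-models archived in one file"
ds.close()

Each sub-model's data variables would then reference their own time
dimension, while anything common to both is stored just once.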

What would be a "critical" limitation for us, however, is a
requirement that the time steps have a constant interval between them
(time step length == constant). I have cases here where they do vary.
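
Non-uniform spacing itself is easy enough to express by writing the
actual instants into the time coordinate; a minimal sketch, again with
hypothetical file and variable names:

import numpy as np
from netCDF4 import Dataset

ds = Dataset("varying_step.nc", "w", format="NETCDF4")
ds.createDimension("time", None)
t = ds.createVariable("time", "f8", ("time",))
t.units = "minutes since 2010-11-11 00:00:00"

# Intervals of 5, 5, 15 and 60 minutes between the stored records.
t[0:5] = np.array([0.0, 5.0, 10.0, 25.0, 85.0])
ds.close()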



> Response to Jon Caron (4/Nov, later post): 20K particles is very
> small for us. 4M is not uncommon and this is likely to increase
> in the future.

This should definitely be taken into consideration in the "design" of
CF. In our experience the number of particles is pushed to the limit of
what the user is patient enough to wait for, given the machine resources
available at any particular time. (The model is often run on a regular
desktop or laptop computer.) Since 4M and beyond is already indicated,
there is at least no reason to accept a 32-bit limitation on indexing in
a CF-conformant file.
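
One way around that, at least with the netCDF-4/HDF5 format, would be to
store any index/offset variables as 64-bit integers. A hedged sketch;
'first_obs' and 'n_obs' are hypothetical names, not an existing CF
convention:

from netCDF4 import Dataset

ds = Dataset("big_run.nc", "w", format="NETCDF4")
ds.createDimension("obs", None)    # one record per particle per stored step
ds.createDimension("time", None)

# 64-bit ("i8") offsets and counts, so more than 2**31-1 records do not
# overflow the index type.
first_obs = ds.createVariable("first_obs", "i8", ("time",))
n_obs = ds.createVariable("n_obs", "i8", ("time",))
ds.close()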



> Response to Ben Hetland (4/Nov): "if one wants to jump directly to
> particle p at timestep t, then a simple prop(index(t)+p) should do the
> lookup" assumes that all particles are stored at each time step. In
> practice for a long run with continuously emitting sources, particles
> are created and destroyed.

Yes, this is a good point! I was implicitly assuming that 'p' would be
an index within the range of particles actually stored at the given
time 't', but I realize this might not always hold true when referring
to a "particle p" in general.

We also have particles created and destroyed as you mention, but then
the index slots in the arrays are often reused. I.e., we cannot in
general say that particle p at time t1 is the same as particle p at time t2.
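
One way identity could still be tracked is a parallel 'particle_id'
variable (a hypothetical name) recording which physical particle occupies
each stored slot; a minimal sketch:

import numpy as np

particle_id = np.array([101, 102, 103,   # step 0
                        101, 104, 103,   # step 1: slot 1 reused for 104
                        104, 103])       # step 2: particle 101 destroyed
first_obs = np.array([0, 3, 6])
n_obs = np.array([3, 3, 2])

def find_particle(t, pid):
    """Return the stored slot of particle 'pid' at time step t, or None."""
    start, count = first_obs[t], n_obs[t]
    slots = np.where(particle_id[start:start + count] == pid)[0]
    return int(slots[0]) if slots.size else None

print(find_particle(1, 104))  # -> 1
print(find_particle(2, 101))  # -> None (destroyed)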


Thanks for your comments!
-- 
	-+-Ben-+-
Opinions expressed are my own, not necessarily those of my employer.