
Re: [Orekit Developers] SolidTides Performance



Hi Evan,

On 30/10/2013 20:46, Evan Ward wrote:
> I have updated spherical harmonics to the new API (fa31260). The new performance numbers are:

I have looked at the change and like it a lot, thanks for this.

At first I thought it should also include mu and ae (or a reference to
the provider), after seeing how it is used from Eckstein-Hechler, because
I found the constructor taking both the provider and the harmonics
cumbersome. Then I noticed it is a private constructor and that from the
public API everything is simple, so my first concerns were wrong.

I also see that the new harmonics are time-stamped and avoid storing
lots of data when possible. I particularly like that only the date offset
is stored in the trend part, and only the two precomputed sine and cosine
values in the pulsating part; this will surely reduce the computation
needed for fields like EIGEN-6 and similar ones, where hundreds of
coefficients depend on only a few periodic terms (one with a six-month
period and one with a one-year period).
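
To make the idea concrete, the gain is roughly what the following sketch
shows (class and field names are purely illustrative, this is not the
actual implementation): the date offset and the sine/cosine of the few
pulsations are computed once per date, and each of the hundreds of
coefficients then only needs a few multiplications and additions.

class PulsatingHarmonics {

    // per-coefficient amplitudes, stored once for the whole field
    private final double[] trendRate;
    private final double[] cosAmplitude;
    private final double[] sinAmplitude;

    // per-date quantities, computed once and shared by all coefficients
    private final double dt;
    private final double cos;
    private final double sin;

    PulsatingHarmonics(double[] trendRate, double[] cosAmplitude,
                       double[] sinAmplitude, double pulsation, double dt) {
        this.trendRate    = trendRate;
        this.cosAmplitude = cosAmplitude;
        this.sinAmplitude = sinAmplitude;
        this.dt           = dt;
        this.cos          = Math.cos(pulsation * dt);
        this.sin          = Math.sin(pulsation * dt);
    }

    // value of coefficient i at the stored date: no sine or cosine is
    // evaluated here, only multiplications and additions
    double getCoefficient(int i) {
        return trendRate[i] * dt
             + cosAmplitude[i] * cos
             + sinAmplitude[i] * sin;
    }

}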

> 
>                 Old (s)  New (s)
> single thread     1.3      1.7
> many threads      72       5
> 
> The concurrent performance is about what I expected, but the sequential performance is slightly worse. I'm not sure why, but it is 0.3 seconds spread over 100,000 force model evaluations, which is probably not worth worrying about.

This is a nice improvement and I agree the single thread case is still
good enough.

> 
> The change does break compatibility with the 6.0 API.

Yes, but I don't think it is too bad. There are several other places
where compatibility has been broken, but they are similar to this one:
what changed is inside a class users will almost never use or
implement themselves. They will simply retrieve the provider from the
factory and pass it to the force models, just as is done currently
with 6.0.
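
For reference, the user-level pattern stays something like the following
(only a sketch: it assumes the gravity field data has already been made
available to the data loaders, and the degree and order values are
arbitrary):

// retrieve the provider from the factory
NormalizedSphericalHarmonicsProvider provider =
    GravityFieldFactory.getNormalizedProvider(20, 20);

// pass it to the force model, exactly as with 6.0
Frame bodyFrame = ...; // e.g. an ITRF frame from FramesFactory
ForceModel gravity =
    new HolmesFeatherstoneAttractionModel(bodyFrame, provider);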

As an additional protection, in this case I have added back the former
methods, with proper deprecation markers. So the very rare users who may
have called these methods in their own applications will still be able
to use them for a while. Do you agree with this change?
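
Such a deprecated method is essentially a thin wrapper delegating to the
new time-stamped API, roughly along these lines (a simplified sketch using
the names from the interface proposal quoted below and a hypothetical
referenceDate field, not the exact code in the repository):

/** @deprecated use {@link #onDate(AbsoluteDate)} and the returned Harmonics */
@Deprecated
public double getCnm(double dateOffset, int n, int m) {
    // delegate to the new API so the returned values are unchanged
    return onDate(referenceDate.shiftedBy(dateOffset)).getCnm(n, m);
}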

Thanks a lot for this contribution.
best regards,
Luc

> 
> Regards,
> Evan
> 
> On 10/24/2013 09:31 AM, Evan Ward wrote:
>> On 10/23/2013 10:38 AM, MAISONOBE Luc wrote:
>>> Evan Ward <evan.ward@nrl.navy.mil> wrote:
>>>
>>>> Hi,
>>> Hi Evan,
>>>
>>>> I noticed 7dd5403 made a big improvement (2x!) in performance of
>>>> SolidTides.
>>> You are always really fast to notice this sort of thing!
>>>
>>>> Since the class was also made thread safe I quickly checked
>>>> its performance with multiple threads using it at the same time. For
>>>> evaluating the tides field in SolidTidesTest 100000 times I got these
>>>> results:
>>>>
>>>> single thread   1.3 s
>>>> many threads   72   s
>>> Thanks for checking. This is odd, and should be improved.
>>>
>>>> I think the big slow down with many threads comes from the layer of
>>>> caching and locking implemented in TidesField. If concurrent performance
>>>> is important for SolidTides then one solution is to eliminate contention
>>>> for that cache by letting the user handle it explicitly.
>>> I am not sure concurrency is really important for solid tides.
>>> Having them become thread safe was a side effect of setting up the
>>> cache. If it is too costly, we can remove it. As this is a force model
>>> used within a numerical propagator, and numerical propagators are not
>>> thread-safe (and will almost certainly never be), it is not a big deal
>>> to remove it.
>> I think there could be some benefit to having thread safe force models.
>> Then some of the data structures could be shared between several
>> sequential propagators. In this case the interpolation grid would be
>> shared, so only one thread would pay the cost of computing the tides to
>> full precision and the other threads would use the interpolant (similar
>> to how tidal effects in frame transformations are handled). IMHO we
>> should investigate if shared force models will save any time or memory.
>>
>>>> Something like:
>>>>
>>>> interface HarmonicsProvider {
>>>>
>>>>     Harmonics onDate(AbsoluteDate date);
>>>>
>>>> }
>>>>
>>>> interface Harmonics {
>>>>
>>>>     double getSnm(int n, int m);
>>>>     double getCnm(int n, int m);
>>>>
>>>> }
>>>>
>>>> Constant Providers could return the same immutable Harmonics each time,
>>>> while time dependent providers could return a new object with the
>>>> precomputed coefficients for that date. Its use would look like:
>>>>
>>>> HarmonicsProvider provider = ...;
>>>> //precompute coefficients for given date
>>>> Harmonics harmonics = provider.onDate(date);
>>>> //use in evaluation loop
>>>> double cnm = harmonics.getCnm(n, m);
>>>>
>>>> Explicit in these interfaces is the assumption that if the user wants
>>>> one coefficient on a particular date then the user will want all
>>>> coefficients on that date. TidesField would still have to use a
>>>> TimeStampedCache for the interpolation points, but no caching/locking
>>>> would be needed for the evaluation points.
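>>>>
>>>> For instance, a constant provider could be as simple as this (sketch
>>>> only, class name illustrative):
>>>>
>>>> class ConstantHarmonics implements HarmonicsProvider {
>>>>
>>>>     // the same immutable coefficients are shared by every caller
>>>>     private final Harmonics constant;
>>>>
>>>>     ConstantHarmonics(Harmonics constant) {
>>>>         this.constant = constant;
>>>>     }
>>>>
>>>>     public Harmonics onDate(AbsoluteDate date) {
>>>>         // nothing depends on the date, so no caching or locking
>>>>         return constant;
>>>>     }
>>>>
>>>> }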
>>>>
>>>> What do you think? I'm open to other ideas too. :)
>>> I like this a lot! It is an elegant and simple solution. I am ashamed
>>> not to have thought about it before. Thanks a lot.
>> Two minds are better than one. :)
>>
>>> Do you want to give it a try yourself or should I do it?
>> I'll take a crack at it. Should I commit this to a separate branch, since
>> I think there will be some breaking changes with the 6.0 API?
>>
>>> The current implementation of solid tides is usable and has received
>>> some validation, but it is not completely finished yet. It needs some
>>> polishing and a few additional effects. Did you have the opportunity to
>>> check it against some other reference results yet?
>> I'll see if I can make the comparison. No promises though. :)
>>
>>>
>>> best regards,
>>> Luc
>>>
>>>> Best Regards,
>>>> Evan
>>>>
>>>>
>>>
>>>
>>
> 
>