
Re: [Orekit Developers] SolidTides Performance



I have updated spherical harmonics to the new API (fa31260). The new performance numbers are:

                Old      New
single thread    1.3 s    1.7 s
many threads    72   s    5   s

The concurrent performance is about what I expected, but the sequential performance is slightly worse. I'm not sure why, but it is a 0.3 second difference spread over 100,000 force model evaluations, which is probably not worth worrying about.

The change does break compatibility with the 6.0 API.
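
For anyone skimming the thread, the provider pattern discussed below can be sketched as a small self-contained toy. The names and signatures here are simplified stand-ins (a plain double stands in for AbsoluteDate), not the actual Orekit interfaces:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Simplified stand-ins for the proposed interfaces; the real Orekit API
// uses AbsoluteDate rather than a plain double.
interface Harmonics {
    double getCnm(int n, int m);
    double getSnm(int n, int m);
}

interface HarmonicsProvider {
    Harmonics onDate(double date);
}

public class HarmonicsDemo {

    // A constant provider can hand out one immutable Harmonics instance,
    // so concurrent callers share it without any locking or contention.
    static HarmonicsProvider constantProvider(final double[][] c, final double[][] s) {
        final Harmonics constant = new Harmonics() {
            public double getCnm(int n, int m) { return c[n][m]; }
            public double getSnm(int n, int m) { return s[n][m]; }
        };
        return date -> constant;
    }

    public static void main(String[] args) throws Exception {
        double[][] c = { { 1.0 }, { 0.5, 0.25 } };
        double[][] s = { { 0.0 }, { 0.5, 0.2 } };
        HarmonicsProvider provider = constantProvider(c, s);

        // Each thread fetches its own Harmonics once (a time-dependent
        // provider would precompute coefficients at this point), then reads
        // coefficients in its evaluation loop with no shared mutable state.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Double> sum = pool.submit(() -> {
            Harmonics h = provider.onDate(0.0);
            return h.getCnm(1, 1) + h.getSnm(1, 0);
        });
        System.out.println(sum.get()); // 0.25 + 0.5 -> prints 0.75
        pool.shutdown();
    }
}
```

The key point is in the evaluation loop: once a thread holds its Harmonics, coefficient lookups touch no shared mutable state, which is what removes the lock contention seen in the many-threads benchmark.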

Regards,
Evan

On 10/24/2013 09:31 AM, Evan Ward wrote:
> On 10/23/2013 10:38 AM, MAISONOBE Luc wrote:
>> Evan Ward <evan.ward@nrl.navy.mil> a écrit :
>>
>>> Hi,
>> Hi Evan,
>>
>>> I noticed 7dd5403 made a big improvement (2x!) in performance of
>>> SolidTides.
>> You are always really quick to notice this sort of thing!
>>
>>> Since the class was also made thread safe, I quickly checked
>>> its performance with multiple threads using it at the same time.
>>> Evaluating the tides field in SolidTidesTest 100,000 times gave
>>> these results:
>>>
>>> single thread   1.3 s
>>> many threads   72   s
>> Thanks for checking. This is odd, and should be improved.
>>
>>> I think the big slow down with many threads comes from the layer of
>>> caching and locking implemented in TidesField. If concurrent performance
>>> is important for SolidTides then one solution is to eliminate contention
>>> for that cache by letting the user handle it explicitly.
>> I am not sure concurrency is really important for solid tides.
>> Having them become thread safe was a side effect of setting up the
>> cache. If it is too costly, we can remove it. As this is a force model
>> used within a numerical propagator, and numerical propagators are not
>> thread-safe (and will almost certainly never be), it is not a big deal
>> to remove it.
> I think there could be some benefit to having thread safe force models.
> Then some of the data structures could be shared between several
> sequential propagators. In this case the interpolation grid would be
> shared, so only one thread would pay the cost of computing the tides to
> full precision and the other threads would use the interpolant (similar
> to how tidal effects in frame transformations are handled). IMHO we
> should investigate whether shared force models will save any time or memory.
>
>>> Something like:
>>>
>>> interface HarmonicsProvider {
>>>
>>>     Harmonics onDate(AbsoluteDate date);
>>>
>>> }
>>>
>>> interface Harmonics {
>>>
>>>     double getSnm(int n, int m);
>>>     double getCnm(int n, int m);
>>>
>>> }
>>>
>>> Constant providers could return the same immutable Harmonics each time,
>>> while time-dependent providers could return a new object with the
>>> precomputed coefficients for that date. Its use would look like:
>>>
>>> HarmonicsProvider provider = ...;
>>> //precompute coefficients for given date
>>> Harmonics harmonics = provider.onDate(date);
>>> //use in evaluation loop
>>> double cnm = harmonics.getCnm(n, m);
>>>
>>> Explicit in these interfaces is the assumption that if the user wants
>>> one coefficient on a particular date then the user will want all
>>> coefficients on that date. TidesField would still have to use a
>>> TimeStampedCache for
>>> the interpolation points, but no caching/locking would be needed for the
>>> evaluation points.
>>>
>>> What do you think? I'm open to other ideas too. :)
>> I like this a lot! It is an elegant and simple solution. I am ashamed
>> not to have thought about it before. Thanks a lot.
> Two minds are better than one. :)
>
>> Do you want to give it a try yourself or should I do it?
> I'll take a crack at it. Should I commit this to a separate branch since
> I think there will be some breaking changes with the 6.0 API?
>
>> The current implementation of solid tides is usable and got some
>> validation, but it is not completely finished yet. It needs some
>> polishing and adding a few effects. Did you have the opportunity to
>> check it against some other reference results yet?
> I'll see if I can make the comparison. No promises though. :)
>
>>
>> best regards,
>> Luc
>>
>>> Best Regards,
>>> Evan
>>>
>>>
>>
>>
>> ----------------------------------------------------------------
>> This message was sent using IMP, the Internet Messaging Program.
>>
>>
>

