
Re: [Orekit Developers] [SOCIS 2011] improved specifications



On Thu, Aug 4, 2011 at 12:05 PM, Luc Maisonobe <Luc.Maisonobe@c-s.fr> wrote:
> Hi Alexis,

Hi,

> Here are some improved specifications for the android application. It is
> a work in progress and we ask you to review them, help defining the
> right choices and organize them in a few work packages, hopefully with
> some estimated schedule.

Thanks :)

I'm currently thinking about how I can architect the whole thing
without duplicating code, while making sure it will work both on
tablets and phones (which is what I understood you want), and about
how I can make the UI.

I may have some trouble testing things on the tablet emulator for
now, since the Android 3.0 emulator runs very slowly on my computer
and is almost unusable, but I will try to get a real tablet when I
receive the first payment (even if I sell it after SOCIS). In the
meantime I can still work on the smartphone version and design it to
be adaptable to tablets; adapting it should only require some minor
code to write :)

> From a functional point of view, the application should allow the user
> to select between a few separate functions:
>  - orbits conversions
>     (Keplerian, circular, equinoctial, Cartesian)
>  - frames conversions
>     (position/velocity, with Orekit predefined frames)
>  - dates conversions
>     (Orekit predefined time scales)
>  - events detection
>     (ground stations, eclipses, apogee/perigee, node ...)
>  - impulse maneuver computation
>     (velocity increment in any frame, including local orbital frames)
>  - orbit visualizing
>     (2D on a map, with ground track and sensors footprints)
>  - attitude visualizing
>     (3D-wireframe unit sphere with axes, planes and angles toggling)

That seems OK to me, except maybe for the attitude visualization
part, because I have never used OpenGL or done any 3D on Android, and
the orbit visualization part, because I have not done any custom 2D
drawing on Android either. But if I have spare time at the end I
could try :) (and I know where to find the resources to learn)

I think I would group this as follows:

1. Frames + Dates conversions (in order to put the basics of the
Android app in place -- I have explained how I plan to architect it
at the end of this mail)
2. Orbit conversion + Impulse maneuver computation + events detection
3. Adapt it to tablets
4. Other tasks (like all the data set management and so on)
5. If applicable, runtime optimization tasks would be nice to do here
(I put this in my original proposal, so it would not be nice to
forget it :) )
6. Orbit visualizing + Attitude visualizing

I'm quite certain that any time schedule I give would turn out wrong
:) But I think I will be able to finish the 1st task in 2-3 days, and
the 2nd task could take a week. The 3rd task could take 1 to 2 days.
I don't know yet about the others.

> From a data point of view, the application should be able to use updated
> data sets (copying files to SD-card is fine). For first users setup,
> having a default initial set of data installed when the application is
> installed is mandatory.

So you mean that you prefer to embed a default data set in the final
.apk and let the user load a custom data set afterwards if they want.
OK :)
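
Concretely, I think the data side could look roughly like the sketch
below on first launch (just a sketch: the asset and file names are
placeholders I made up; the Orekit side uses the usual
DataProvidersManager/ZipJarCrawler calls):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.orekit.data.DataProvidersManager;
import org.orekit.data.ZipJarCrawler;

import android.content.Context;
import android.os.Environment;

/**
 * Sketch: on first run, extract the default data set bundled in the APK
 * assets to internal storage, then register it with Orekit; an updated
 * data set copied by the user to the SD card is registered too.
 * "orekit-data.zip" is just a placeholder name.
 */
public class DataSetup {

    public static void configureOrekit(Context context) throws IOException {
        File defaultData = new File(context.getFilesDir(), "orekit-data.zip");
        if (!defaultData.exists()) {
            // first launch: copy the bundled default data set out of the APK
            InputStream  in  = context.getAssets().open("orekit-data.zip");
            OutputStream out = new FileOutputStream(defaultData);
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            out.close();
            in.close();
        }
        DataProvidersManager.getInstance().addProvider(new ZipJarCrawler(defaultData));

        // updated data set copied by the user to the SD card, if any
        File userData = new File(Environment.getExternalStorageDirectory(), "orekit-data.zip");
        if (userData.exists()) {
            DataProvidersManager.getInstance().addProvider(new ZipJarCrawler(userData));
        }
    }

}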

> If some functions can process TLE data, then
> downloading TLE from the net should be possible. User should also be
> able to store some data they use frequently (typically station
> coordinates or orbits) and select them from drop lists. Users should be
> able to manually edit the parameters and store them for later reuse.
> When setting up an orbit, users should be able to use any supported type
> (i.e. they can input an orbit in Keplerian, circular, equinoctial or
> Cartesian parameters at will). For ground point definition, if the
> device supports it the current rough location should be available for use.

That's OK. I have thought about how I could build the parameter forms
without duplicating code, and here is what I currently aim to do:

In Android, if you want to open a window you don't instantiate a
class and call instance.show(). Instead, you declare that a class
(which extends Activity) will be in charge of an Intent. An Intent is
basically a query: you can add parameters to it, even Serializable
objects, and it can return a value.

So if you want to open the window, you tell the Android API that you
want somebody to answer the intent "org.orekit.SELECT_FRAME" (you can
force a specific class too if you want). The Android system then
browses all the installed applications to see if one of them accepts
this intent, instantiates the matching class, hands it the query
through an RPC mechanism, and returns the result through RPC as well.
That method is very powerful, because a user can build an app
listening for "org.orekit.SELECT_FRAME" with its own frame selector,
and the Android system will show a dialog box asking the user which
one to use. That is how you can replace a lot of parts of the Android
platform with your own programs.
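
To be concrete, the calling side could look roughly like this (only a
sketch: ManeuverSetupActivity, the callback, the request code and the
"org.orekit.FRAME" extra key are placeholder names I just made up;
only the "org.orekit.SELECT_FRAME" action is the one described above):

import org.orekit.frames.Frame;

import android.app.Activity;
import android.content.Intent;

/** Sketch of the calling side: any window needing a frame just fires the intent. */
public class ManeuverSetupActivity extends Activity {

    private static final int REQUEST_SELECT_FRAME = 1;

    /** Hypothetical callback, e.g. wired to a "choose frame" button. */
    public void onChooseFrame() {
        // implicit intent: Android looks for every installed Activity whose
        // intent-filter declares the "org.orekit.SELECT_FRAME" action
        startActivityForResult(new Intent("org.orekit.SELECT_FRAME"),
                               REQUEST_SELECT_FRAME);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SELECT_FRAME && resultCode == RESULT_OK && data != null) {
            // the selected Frame comes back as a Serializable extra
            Frame frame = (Frame) data.getSerializableExtra("org.orekit.FRAME");
            // ... use it to fill the orbit / maneuver parameters
        }
    }

}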

The idea is to port what has been done in the testapp with
promptFrame() and promptDate() and turn them into separate Activities
in the Android app, listening to "org.orekit.SELECT_FRAME" for
instance. This would launch a window where the user can select a
frame; this window would be in charge of reading saved settings and
so on, and would return the Frame (as Frame implements Serializable)
through the Android RPC system to the original window. Doing the same
with the other parts of the settings reduces the amount of code
duplicated between the different parts of the app.
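
The promptFrame()-like Activity itself could then look roughly like
this (again just a sketch; SelectFrameActivity and the extra key are
made-up names):

import org.orekit.frames.Frame;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

/**
 * Sketch of the promptFrame()-like Activity answering "org.orekit.SELECT_FRAME".
 * In AndroidManifest.xml it would declare an intent-filter with the
 * action "org.orekit.SELECT_FRAME" so the system can find it.
 */
public class SelectFrameActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // here: read the saved settings, build the list of predefined and
        // user-defined frames, and display it (ListView, spinner, ...)
    }

    /** Hypothetical callback fired once the user has picked a frame. */
    private void onFramePicked(Frame frame) {
        Intent result = new Intent();
        // Frame implements Serializable, so it can go straight into the extras
        result.putExtra("org.orekit.FRAME", frame);
        setResult(RESULT_OK, result);
        finish(); // control (and the Frame) goes back to the calling window
    }

}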

So, once these promptFrame-, promptDate-, ...-like Activities are
done, it will be very easy to let the user choose a parameter.

For instance, a promptTLE-like Activity could be in charge of
downloading TLEs from the Internet, letting the user choose one and
so on, and would be called by a promptOrbit-like Activity, which
would itself be called by the parameter window of the event
definition, for instance, etc.

The same idea would be used to pass the data from the parameters
Activity to the result/computation Activity (by embedding the
parameters in the Intent), and so on, as in the sketch below. I have
also figured out a lot of the parts of how to build the app according
to your list. That means that if you agree with what I say, I just
need to quickly sketch the UI on paper (I would finish this by the
end of the day) just to check that I did not forget anything, and if
everything is OK I can start the actual Android development tomorrow :)
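
For instance, the event-detection parameters window could end with
something like this (pure sketch: the class name, the
"org.orekit.DETECT_EVENTS" action and the extra keys are
placeholders, and I assume Orbit is Serializable like Frame):

import org.orekit.orbits.Orbit;

import android.app.Activity;
import android.content.Intent;

/**
 * Sketch of a parameters window handing its inputs over to the
 * computation window through Intent extras.
 */
public class EventParametersActivity extends Activity {

    /** Hypothetical callback fired when the user taps "compute". */
    private void onCompute(Orbit orbit, double durationSeconds) {
        Intent compute = new Intent("org.orekit.DETECT_EVENTS");
        compute.putExtra("org.orekit.ORBIT", orbit);          // Serializable extra
        compute.putExtra("org.orekit.DURATION", durationSeconds); // plain double extra
        startActivity(compute);
    }

}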

>
> best regards,
> Luc and Pascal
>

By the way, I posted the tutorial you asked me for yesterday on the wiki:
https://www.orekit.org/forge/projects/socis-2011/wiki/HowToBuildToAndroid
I hope this is what you wanted :)

Alexis