[Tfug] Hardware reliability

Bexley Hall bexley401 at yahoo.com
Tue Apr 21 11:55:28 MST 2009


--- On Mon, 4/20/09, Zack Williams <zdwzdw at gmail.com> wrote:

> > I haven't (yet) figured out how to put the database(s) on this
> > project under version control.   :-(  I'm still trying to
> > develop a good mental model of the role they play so I can
> > draw parallels to how that role would *normally* be handled
> > (in a more conventional implementation)
> 
> First part is deciding whether you really need a database.
> There's a kind of strange knee-jerk "I have ordered persistent data!
> I need a database!" reaction, where any program that does minor data
> processing requires a full-on SQL database.  But I doubt you're in
> that situation.

Actually, for the types of products that I design (embedded
systems), a formal database is the *exception*, not the rule!
(you typically don't have the resources for something that
extravagant!)

I have a set of three projects in the pipeline which make increasing
use of the notion of a formal database embedded in the product.

I am a *huge* fan of table driven code.  It tends to make the
code more reliable (IMO) and maintainable.  Build a framework,
debug that framework and then use tables to drive that mechanism.

Usually, these tables are embedded *within* the code.  E.g.,
const structs of some sort that are traversed by an
"interpreter" (generic-speak).  However, in this case, I am
making the tables more *visible* to facilitate modifications
"in the field".
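
The idea above might be sketched like this (names and commands are
purely illustrative, not from any actual product):

```python
# Table-driven code sketch: a small "interpreter" walks a const table
# of (name, handler, argument) entries instead of hard-coding the
# control flow.  Debug the interpreter once; change behavior by
# editing table rows, not code.  All names here are hypothetical.

def set_led(state):
    return f"LED {'on' if state else 'off'}"

def set_baud(rate):
    return f"baud={rate}"

# The "const struct" table that drives the mechanism.
COMMAND_TABLE = (
    ("led_on",  set_led,  True),
    ("led_off", set_led,  False),
    ("fast",    set_baud, 115200),
)

def interpret(command):
    """Traverse the table and dispatch the matching entry."""
    for name, handler, arg in COMMAND_TABLE:
        if name == command:
            return handler(arg)
    raise KeyError(command)
```

Adding a new command is then a one-row table edit, with no new
control-flow code to debug.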

Also, putting all of the "configuration data" in tables (avoiding
the misnomer of "databases") lets me develop a single tool that
the user can use to configure the products' behavior without 
having to build a special "configuration program".  I.e., the
configuration program is just another "table driven program"
that, itself, modifies *other* tables!
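
A minimal sketch of that "table-driven configurator" idea, assuming a
metadata table that describes each tunable (the parameter names and
ranges below are invented for illustration):

```python
# Hypothetical sketch: the configuration tool is itself table-driven.
# One metadata table describes each tunable (type, allowed range); a
# single generic edit routine uses it to validate changes to the
# actual settings table -- no per-parameter code needed.

PARAM_TABLE = {
    # name:       (type,  minimum, maximum)
    "volume":     (int,   0,       10),
    "timeout_s":  (float, 0.5,     60.0),
}

settings = {"volume": 5, "timeout_s": 10.0}   # the table being edited

def configure(name, raw_value):
    """Generic editor: validate a change using the metadata table."""
    kind, lo, hi = PARAM_TABLE[name]
    value = kind(raw_value)
    if not (lo <= value <= hi):
        raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    settings[name] = value
    return value
```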

Lastly, I use portions of the database as a whiteboard for
IPC (sort of).  It lets parts of the application (applets?)
share data without using conventional data structs protected
by mutexes.
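
A toy sketch of that whiteboard, using sqlite3 only so the example is
self-contained (the post's actual RDBMS is PostgreSQL); the point is
that the database engine serializes access, so the "applets" never
touch a shared struct or a mutex:

```python
import sqlite3

# Hypothetical "whiteboard" IPC: parts of the application share data
# through a table.  Concurrency control is delegated to the database
# engine instead of explicit application-level locking.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE whiteboard (key TEXT PRIMARY KEY, value TEXT)")

def post(key, value):
    """One 'applet' publishes a datum on the whiteboard."""
    conn.execute(
        "INSERT OR REPLACE INTO whiteboard (key, value) VALUES (?, ?)",
        (key, value),
    )
    conn.commit()

def read(key):
    """Another 'applet' picks it up later -- no shared struct."""
    row = conn.execute(
        "SELECT value FROM whiteboard WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None
```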

All of this, of course, comes with a cost -- in terms of
size and speed.  But, it seems a far better trade-off than
spending resources on silly screen animations, etc.  :>

> If you really need a high performance database, it's best
> to script a dump of the data to an intermediate text-style 
> format (so you can diff it), then check that into version 
> control.

Yes, that's the approach I had been using early on -- when
the data were small.  Now, however, the database has grown
considerably -- and will grow even more! -- so the cost of
doing this is becoming expensive.  I.e., locking the database
for the duration of a "dump" impedes development as portions
of the data are not static.

And, it takes up a metric buttload of space for each snapshot
(even the diff(1)s get big because this is now *text*)
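
For the record, the dump-to-text approach under discussion looks
roughly like this; sqlite3's iterdump() stands in here for a pg_dump
of the actual PostgreSQL database, just to keep the sketch runnable:

```python
import sqlite3

# Sketch of "dump the database to diffable text for version control".
# The table and rows below are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (key TEXT, value TEXT)")
conn.execute("INSERT INTO config VALUES ('mode', 'field')")
conn.commit()

def snapshot(connection):
    """Serialize the whole database as stable, diff-friendly SQL text."""
    return "\n".join(connection.iterdump())

text = snapshot(conn)
```

The snapshot can then be checked in and diffed -- at the cost, as
noted above, of locking the live database for the duration of the
dump and of storing bulky text per snapshot.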

Dunno.

> Otherwise, see about SQLite, storing the data in a CSV file,
> or a Maildir-esque file per record setup.

PostgreSQL is the RDBMS of choice.  Alternate representations
of the data just get *huge*.  E.g., the final *deployed* database
will be just about 1GB.

I think I will just have to live with this as an unfortunate (?)
consequence of this design approach.  :<

