[Tfug] RDBMS reprise

Judd Pickell pickell at gmail.com
Tue Jan 22 10:26:05 MST 2008


If I may offer a thought (I may have missed something in reading
your requirements), here goes:

What if you have the cursor create a temporary table to store the
query results? There are probably a number of ways to do it, and a
stored procedure would probably work best. As far as I know, a
compiled stored procedure is the fastest way to execute a query on
the server, so even with a large data set it could be done quickly,
minimizing server usage. I am not entirely clear whether the temp
table is stored in memory or on disk; I guess that depends on which
DBMS you are using.

Basically, the temp table will be destroyed at the end of the client
session, so server resource usage stays minimal and the content is
static. You could then implement the bite-at-a-time approach against
that static copy, spreading the load across all components (client
and server).
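
To make that concrete, here is a rough sketch in PostgreSQL-flavored
SQL (the table and column names are invented, and the CREATE could
just as well live inside a stored procedure):

    -- snapshot the query results into a session-private temp table
    CREATE TEMPORARY TABLE results_tmp AS
        SELECT id, name, reading
          FROM sensor_log
         WHERE reading > 42;

    -- the client then takes bites out of the static copy
    SELECT id, name, reading
      FROM results_tmp
     ORDER BY id
     LIMIT 50;

The temp table is dropped automatically when the session ends, so
there is nothing extra to clean up.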

If this has been brought up before, I apologize.

Sincerely,
Judd Pickell


On Jan 22, 2008 9:54 AM, Bexley Hall <bexley401 at yahoo.com> wrote:
> Argh!  Sorry to revisit this issue but I keep
> thinking myself in circles... :<
>
> Recall I am trying to work with a (severely)
> resource-starved client and databases of
> potentially large size.  The problem is
> handling queries which may return a large
> number of results.
>
> There are several strategies -- each with a
> different set of tradeoffs/problems:
>
> 1* Client issues query and pulls down "all"
> results.
>
> This is the *ideal* solution as the snapshot
> it provides of the data is known to be self
> consistent (i.e., query is atomic operation).
>
> The server-side resource requirements are
> transient -- the data is not duplicated.  But,
> the resource requirements on the client are
> boundless.
>
>
> 2* Client issues query and uses server-side
> cursor to step through the results.
>
> This shifts the resource load to the server.
> It runs the risk of requiring the entire
> dataset to be duplicated on the server
> (e.g., "SELECT *").  However, the results
> are always self-consistent.
>
> Client-side resource requirements are at
> their minimum, here (?).  However, some
> additional overhead is required to manipulate
> the cursor.
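
For what it's worth, the cursor dance in 2* looks roughly like this
in PostgreSQL-style SQL (names invented, batch size arbitrary):

    BEGIN;
    DECLARE results_cur CURSOR FOR
        SELECT id, name, reading
          FROM sensor_log
         ORDER BY id;

    FETCH FORWARD 50 FROM results_cur;  -- repeat until no rows return
    CLOSE results_cur;                  -- frees the server-side state
    COMMIT;

The cursor only exists inside the transaction, so the client has to
keep that transaction open for as long as it is walking the results.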
>
>
> 3* Client issues query and pulls down some
> number of results each time -- reissuing
> the query until all results have been seen.
>
> Server-side resources are reduced by taking
> smaller "bites" instead of an entire snapshot.
>
> This bounds client-side resource requirements
> at the expense of some overhead in accessing
> the dataset.
>
> But, the data may contain inconsistencies as
> the dataset -- as a whole -- is not accessed
> atomically (i.e., it can change between "bites").
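
A sketch of 3*, again PostgreSQL-flavored with invented names, using
keyset paging (remember the last key seen) rather than OFFSET so each
"bite" stays cheap:

    -- first bite
    SELECT id, name, reading
      FROM sensor_log
     ORDER BY id
     LIMIT 50;

    -- later bites: re-issue with the last id from the previous bite
    SELECT id, name, reading
      FROM sensor_log
     WHERE id > 1234          -- placeholder for the remembered id
     ORDER BY id
     LIMIT 50;

As you say, rows can be inserted, deleted, or changed between bites,
so the union of the bites is not a consistent snapshot.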
>
>
> 4* Client locks the table(s) on the server
> and accesses some number of results at a time.
>
> This bounds server- and client-side resources
> at the expense of locking the data for an
> indeterminate length of time.
>
> Inconsistencies are not a problem as the
> query is effectively atomic (since dataset
> is locked for the duration).
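
And 4* would be something like this in PostgreSQL terms; SHARE mode
blocks writers but still lets other readers in (names invented):

    BEGIN;
    LOCK TABLE sensor_log IN SHARE MODE;
    -- take bites with LIMIT/keyset queries, as in 3* ...
    COMMIT;   -- the lock is only released here

The lock is held for the whole transaction, which is exactly the
indeterminate-duration problem you describe.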
>
>
>
> 1* is out -- it won't work in my environment
> due to resource limitations on the client.
>
> 3* seems to be the worst of both worlds -- it
> is harder to implement and requires conscious
> handling of potential inconsistencies in the
> data (in a way which depends on the nature of
> the *specific* data -- i.e., inconsistencies
> in one set of query results may need to be
> handled differently than inconsistencies in
> a different query's results... you can't
> encapsulate/standardize this handling!)
>
> 4* is downright scary -- unless the compute-time
> of the client can be bounded and *guaranteed*,
> it seems like it is just one giant deadlock
> waiting to happen!  (since other applications
> will be accessing some or all of the dataset
> concurrently).
>
>
> (sigh) This sucks!  But, I guess it is the only
> practical solution to my situation (2*)?  Have
> I missed any other approaches?
>
> *If* I adopt this approach, what do I have to
> watch out for?
>
> No doubt I have to balance the resources available
> on the server against the possible concurrent
> demands that can be made on it.  Clients will
> have to treat query results as precious and
> dispose of them as soon as practical (since holding
> onto them ties up resources on the server, which
> impacts other clients).
>
> Are there any tricks I can use to hint to the
> server that it can *begin* reusing these resources?
> E.g., if I always walk the cursor through the
> result set from top to bottom, can the server
> exploit this fact to release the resources used
> by result records that I have already examined?
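
On that last question: I can't speak for every DBMS, but PostgreSQL at
least lets you declare your intent. A NO SCROLL cursor tells the
server you will only walk forward, and closing the cursor as soon as
you are done releases its resources without waiting for the session
to end. Whether rows you have already fetched are freed any earlier
than that is implementation-dependent:

    DECLARE results_cur NO SCROLL CURSOR FOR
        SELECT id, name, reading FROM sensor_log ORDER BY id;
    -- FETCH FORWARD batches as above, then:
    CLOSE results_cur;   -- don't wait for COMMIT or session end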
>
> Thx,
> --don
>