Category Needs Refactoring (Certainly does!!)
Isn't this what VFP 8 gives us in the CursorAdapter?
(or: how to get in trouble)
Peter - so, you want to create an API that will take xBase commands and perform them on SQL tables? Is this the thrust of your idea here?
VFP already does this, with views. You can create a view and treat it just like an xBase table, then update it to the back end.
Generally OK. However, where does the view come from? Writing, in the end, is not a problem (DB writes go through a general function).
You define the view using the View Designer or the CREATE SQL VIEW command. All a view really is, is a SQL statement stored in the DBC, with some info on the fields in the view for updating purposes. Once the view is created, you USE it just like a table. You can also index it if you want to use SEEK.
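To make that concrete, here is a minimal sketch of a remote view, assuming a connection named conn1 already defined in the DBC and a hypothetical customer table keyed on cust_id:

```foxpro
* Create a remote view in the current DBC over a SQL Server table
CREATE SQL VIEW v_customer ;
    REMOTE CONNECTION conn1 ;
    AS SELECT cust_id, name FROM customer

* Mark the view updatable so TABLEUPDATE() sends changes back
DBSETPROP('v_customer', 'View', 'SendUpdates', .T.)
DBSETPROP('v_customer.cust_id', 'Field', 'KeyField', .T.)
DBSETPROP('v_customer.name',    'Field', 'Updatable', .T.)

* From here on it behaves like a table
USE v_customer
INDEX ON cust_id TAG cust_id   && local index on the view cursor
SEEK 'C1001'
```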
However, my recommendation when moving to SQL is to do any large data manipulation in native SQL (i.e. stored procedures), where you get the best performance you possibly can. This is also easily scalable by building a SQL Server farm. You would be surprised at what SQL Server 2000 can do!
Confused... This looks fine for the writes, and in general OK. But where do the reads come from? Can any former SEEK (now some other command) be caught by a stored procedure? And (sorry, I don't know) can this be done without a DBC (too many tables)?
You can create a client/server project without a DBC. But if you do use a DBC, the views would be all it contains. As for stored procedures: let's say you have a routine in your application that posts orders to customer accounts. You would write that code into a SQL stored procedure, and the client would just call the stored procedure, rather than pulling hundreds of records to the client, processing them, and saving them back.
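As a sketch (all names hypothetical), the client side of such a posting routine shrinks to a single SQLEXEC() call, with the heavy lifting living in a T-SQL stored procedure on the server:

```foxpro
* Hand the whole posting batch to the server in one round trip
ldDate = DATE()
lnConn = SQLCONNECT("MyDSN", "user", "password")
IF lnConn > 0
    * PostOrders is a hypothetical stored procedure that posts all
    * pending order lines to the customer accounts server-side;
    * ?ldDate passes the VFP variable as a parameter
    lnOk = SQLEXEC(lnConn, "EXEC PostOrders ?ldDate", "crResult")
    SQLDISCONNECT(lnConn)
ENDIF
```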
Got it! Well, I think. So your thrust is to fool the app by re-forming all the former normal tables into views, i.e. extracts of the remote SQL tables; if this is not exactly your idea, I'll adopt something like it anyway. The task left for me, as far as I can see, is to get hold of the actual data in the view, where the view must dynamically load the records the app asks for, and this at framework level. This seems to lead to combining the view with a result cursor somehow (or can the latter do the job by itself?). Yes, I like this approach as a basis very much; it may even lead to an easy differentiation between SQL-based tables and native DBF tables at the higher level (i.e. only the tables that need it for whatever reason will be SQL, technically enforced by some metadata). Thanks very much, Lauren.
Having (as you know) a rather big mouth (or whatever caused it), we've got the assignment to process 220,000 sales order lines PER DAY (are they crazy?).
All processing is automated, initiated by EDI (the orders) and producing EDI (the invoices) as output as well. "In between" is our ERP, and e.g. purchase orders are part of the process too. In fact this can be seen as a back end for a number of supplier-customer relations, which takes the administration off the suppliers' hands (yep, this is where the world is going).
During the first quotation we talked about 44,000 sales order lines per day, and a quick calculation of mine led to my statement: OK, you have 17.8 weeks to go, and then we hit the 2 GB limit on the largest table used. "But I will find a way around this, splitting that table up into 'other' normalized pieces so it lasts longer."
Functionally, the customer of my facilitating customer found the idea so good that he extended my customer's assignment to the 220,000 lines.
My solution: "OK, I'll get the app working on SQL (i.e. SQL Server, Oracle) before the end of this year (2001)."
And now I am in trouble ... Haha.
Yes, our app still uses native DBFs for the database, and where I had planned to convert (better: refactor) sometime in the next few years, it now all has to happen rather faster. OK, too fast to get it done in time.
Background to all of this: the 220,000 of what I call elementary logical transactions (ELTs) imply a multiple of other ELTs; think of the invoice lines to be generated as the most understandable example. I haven't calculated how many there will be yet, but a few million is a first estimate. BTW, these few million ELTs will in turn lead to yet another multiple of written records (new and/or changed); I think at least 10M per day.
One of the "logical" problems here is the nature of the process, which leads to handling everything on one PC (because of the EDI initiation), although it could be spread over more PCs. However, apart from the 2 GB problem, I expect some elapsed-time problems, and may need 25 hours in the 24-hour day.
Now, while I have numerous ideas on how to refactor the app to be SQL-based, in fact none of them fit here, due to the lack of time to do it in (having only half a year). To make this clear: one of the ways is re-generating all of the code, where SEEKs become SQL SELECTs (and so on), but that is a hell of a job because the logic in the programs (loops etc.) needs to change too; while I expect I could succeed at this eventually, I don't expect to manage it within the given time.
For the moment I think of the following solution:
All the DB commands (like SEEK etc.) must be converted to xSEEKs: functions written in another language (C++) containing all the (extended) logic to present the results to the functional code, leaving that code, with its logic, as it is now.
Note that the reason to create these functions in another language is the speed needed to call them. I mean, DOing a program (or invoking a normal UDF) will IMO be too slow, implying roughly 0.04 sec (at 600 MHz) of overhead for the load and re-load. So creating "our own Fox commands" is the way to do it, giving theoretically the same overhead as any Fox command. However:
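To make the shape of such a replacement concrete, here is a hypothetical pure-VFP xSeek() (an FLL/C++ version would keep the same signature). gnConn is an assumed global connection handle, and the per-call UDF overhead is worth measuring before reaching for C++:

```foxpro
* Hypothetical drop-in replacement for SEEK against a remote table:
* fetch the single matching row into a result cursor and return
* the equivalent of FOUND(). gnConn is an assumed open connection.
FUNCTION xSeek(tuKey, tcTable, tcKeyField)
    LOCAL lnRows
    lnRows = SQLEXEC(gnConn, ;
        "SELECT TOP 1 * FROM " + tcTable + ;
        " WHERE " + tcKeyField + " = ?tuKey", "crHit")
    RETURN (lnRows > 0 AND RECCOUNT("crHit") > 0)
ENDFUNC
```

The calling code then changes only from SEEK to xSeek(), leaving the surrounding loops and logic intact.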
Many years ago we tried this (C++) in FPDos (!), and it left us with a huge overhead, i.e. many factors slower than the DO of a normal program (or UDF). This, even though we followed the formal way to create the "custom commands".
Now the question:
Can anyone help me with creating custom commands that operate at normal command speed, using today's techniques in the VFP environment?
I must say that we are not familiar (anymore) with everything that is available for this, and right now don't have the applicable (current) compilers.
If I remember well, we built API libraries before, which IMO should be the formal way to do it.
Of course the solution may be anything, as long as each single native DB command for native DBFs can be automatically converted to another command (or more commands if necessary), and as long as I am able to hook my own code in there. I mean that using the now available APIs for SQL (?) IMO won't lead to the solution needed, because I cannot get my own code in there. On this point, please consider my ideas (for now!):
This code should be able to present any result cursor from SQL as a normally treated table to the further logic of the functional programs;
I know this may sound "too far" (it does to me too...), but in fact we already use this exact technique in the environment of our grids, where the ControlSource is a table derived from the tables with the original data, and the functional programs looping through the original tables in the end present the data in a normal VFP grid, without that functional program ever changing. So guys, this is quite possible, but it is only a small part of the solution I need here.
Since, whatever the solution will be, it will (must) be highly generic, I plan to make it available as shareware; when it really works, MS could make it available as a high-level API set (or even as normal commands);
In the end we would all have any SQL-based DBMS connected to any app written for native DBFs, which some of us may benefit from.
Anyhow, I (and thus we) am not there yet, so I would appreciate any help on this very much.
-- Peter Stordiau
I have an idea, no clue how to implement it.
Start with an empty cursor, kind of like USE V_foo NODATA.
xBase commands issued against that cursor are 'converted' to SQL commands that fulfill the xBase command. There are really only a few categories: add, update, delete, and move the record pointer. Add, update, and delete are pretty much covered now. Moving the record pointer... what exactly do we need for that, if we exclude the RECNO() function from the list of 'supported' commands/functions? So LOCATE FOR cFid1="bar" gets this sent: SELECT TOP 1 * FROM foo WHERE cFid1='bar' ORDER BY 1; the resulting record is put in the view, the record pointer is on it, done. SEEK would use the current index expression and create a comparable 'locate'. SKIP (both the command and in a Browse) presents a roadblock that I haven't figured out how to get around. At this point I realize that it isn't going to happen, and no one will pay me for it anyway.
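One possible way around the SKIP roadblock (a sketch of my own, not anything VFP provides): emulate SKIP with a keyset query that fetches the first row beyond the current key value. This assumes a unique, ordered key column and an open connection handle gnConn:

```foxpro
* Hypothetical xSkip(): emulate SKIP 1 on a remote table by asking
* for the first row past the current key value. tcKeyField must be
* a unique, ordered key (e.g. an identity column); SKIP -1 would
* flip the comparison to < and the ORDER BY to DESC.
FUNCTION xSkip(tcTable, tcKeyField, tuCurKey)
    LOCAL lnRows
    lnRows = SQLEXEC(gnConn, ;
        "SELECT TOP 1 * FROM " + tcTable + ;
        " WHERE " + tcKeyField + " > ?tuCurKey" + ;
        " ORDER BY " + tcKeyField, "crNext")
    RETURN (lnRows > 0 AND RECCOUNT("crNext") > 0)
ENDFUNC
```

A Browse would still need a window of rows rather than one at a time, but the same keyset idea extends to TOP n.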
( Topic last updated: 2003.07.29 08:27:31 AM )