
Tune Fox

Below are some tuning aspects at the technical level, applicable to Fox in general, or to VFP where indicated.
Please feel free to comment, benefit from it, or doubt whether things are true.
Only things are mentioned which may not be known to everyone, or which are not already in "the book".
The base is native tables, so not all of it may be applicable to Oracle, SQL Server etc.

It's all based on experience, and on the necessity of operating a Big System;
Please note that the description in Big System doesn't quite cover my subjects below, so I'll put down a small definition myself; whether it is true or not is not so important, but this way my tuning params below are consistent with the definition of "when you need this" :

"A Big System is a system where the number of objects, programs, tables, and the number of users is so high that all the processes occurring in this system (which may be implied by a large number of transactions) may be expected to create performance problems. All together, the performance of the system is highly influenced by the application, its use of resources, and the place those resources are held; usually the influence of the users' actions is out of our control, though they too can be influenced more or less by normalizing the processes performed (= system functionalities) to the proper places (= proper users), thereby eliminating redundancy and thus using fewer system resources overall."

Note :
It is not a good idea to apply the params below only to Big Systems; just apply them always, giving you lots of room for possibly needed overhead (shells etc.) in the future !

For starters, never, just never put any sneaky high-performance stuff in any "Functional Program"; put it in system programs. IOW : should you change your mind later (or should MS change its mind !), allow yourself to make the needed amendment only once !

Keep in mind that, as a first approximation, every factor of 2 in eliminated I/O allows for 2 times more users. In practice this may be far from true: a system may thrash at 100 users, giving no response at all, while 50 users won't have any problem.

Use one physical disk for the DBFs and one for the CDXs. The difference is hard to calculate, but it is for sure not only a factor 2; think in terms of a factor 100.

Use Duplication (duplexing, a separate controller per disk) for your two disks; for read I/O, again a factor 100 may be gained. Don't use Mirroring (Novell), which helps nothing on performance.
Duplication can be used in both NT and Novell. When properly installed, a copy from server to PC will light the disk lights of the two duplicated disks one by one; when wrongly installed, both lights will light at the same time.

Never use RAID-5 (though everyone advises it); it'll only slow things down. Consider RAID-4 instead.

Use local tables whenever it is possible and legitimate; all I/O kept off the network is a gain.

Never use the C: drive of the user's own PC, which is allowed in Citrix (not WTS); think of where the data goes, which in this case is over the slow wire.

Put the Fox EXE (DLL) on the local PC, no matter what (difficult for maintenance though); keep in mind that even certain keypresses access the overlays, which are not cached !

Never think : well, this is a memory loop only, nobody on the network will notice;
Once you have upgraded the user task to Citrix (etc.), you will slow down all other Citrix users with this beautiful no-harm action.

Try to avoid DOS editors (like EDIT) as much as possible. This is the same issue as with Citrix above: NT (where Citrix/WTS run) can't cope with them, giving 100 % CPU usage. This applies to both NT4 and Windows 2000.
Or use Tame DOS.

Use the smallest available blocksize for the DBF volume (or disk); NT goes smaller than Novell, so NT can perform better than Novell here. Keep in mind that functionally you access records, whereas the server accesses blocks, and a record usually has a length of 100 bytes or less.
A good guess for the blocksize of the CDX disk/volume is 32 KB (don't go smaller; higher may be all right, depending a bit on what the app is doing).

When still in need of capacity, use two servers, one containing a disk for the DBFs and the other a disk for the CDXs; duplicate these again, and you will be able to support 1,000 users on one logical system consisting of two servers.

When your PRGs are not bound into one EXE, APP, procedure file etc., you may have several thousands of files in one directory; don't forget everything else that results in (many) files. Once you have this amount in one directory, Novell 4 or lower will have CPU difficulties: the directory entries are in memory (when Set Number Of Directory Cache Blocks is set high enough), but the server finds files by searching through them sequentially. The same applies to the PC itself when running W95 or W98. NT (4, 2000) keeps it all alphabetically sorted, for both Workstation and Server.
So you know what to choose now.

Though everyone will say to the above "okay, just build the one EXE for all of it", and indeed this is the most logical and structured path to follow, you will find yourself in trouble when you need to amend one program (or whatever), resulting in building it all again, and sending it all over the wire again.
Though not a performance topic by itself, this indeed leads to keeping everything separated, and then having the performance topic at hand.
Keep in mind that the build of an app of several thousand files will seem to never stop running, in order to find all the references.

Never perform a File() (or related commands) without first doing a Set Path To (nothing). In the DE (Development Environment), Set Development will be On, and Fox will look for the latest version of the PRG, APP, EXE and so on, and will do this in all directories in the path; this even when the file needed is in the current dir.
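A minimal sketch of the safe pattern (file and program names here are made up for illustration):

```foxpro
* Sketch with hypothetical names: clear the path before probing for a file,
* so File() only touches the current directory instead of every path entry.
lcOldPath = SET("PATH")
SET PATH TO               && empty the search path
IF FILE("invoice.fxp")    && now checks the current dir only
    DO invoice.fxp
ENDIF
SET PATH TO &lcOldPath    && restore the original search path
```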

When the app is distributed as the single programs (so no big EXE etc.), rename all FXPs to EXEs. This just works fine (in all Fox versions), and stops Fox from looking for newer versions in the runtime environment (yes, it does something there too).
Keep in mind that several virus checkers may become active on your EXEs which just aren't EXEs, so tell the user not to scan your program dir (anyone ever heard of a virus in a normal compiled (FXP) program ?).

When having lots of BMPs (etc.) in VFP, use VFP6 rather than VFP5; the improvement may be a factor 2 because of the enhanced resource caching in VFP6.

Allow yourself to get fired if you use Locate For without a Seek and a While first; never read records you won't use, except for the one extra that is unavoidable.
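A sketch of the Seek + While pattern (table, tag and field names are hypothetical):

```foxpro
* Assumes a table "orders" with a CDX tag on cust_id (hypothetical names).
USE orders ORDER TAG cust_id

* Slow: Locate For reads every record in the table.
* LOCATE FOR cust_id = "C1042" AND amount > 100

* Fast: Seek positions via the index, the While clause stops at the first
* non-match, so only the matching group of records is ever read.
SEEK "C1042"
SCAN WHILE cust_id = "C1042" FOR amount > 100
    * ... process this record ...
ENDSCAN
```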

When deleting a record, first replace its key with a value that sorts the record to the bottom of the table. Keep in mind that this allows for thousands (or millions if you like) of deleted records without ever encountering them again when performing a Skip (-1). Of course this implies that an index should always be active when accessing the table.
I'm told that since the availability of the For clause, this can just as well be realized via Index On ... For Not Deleted(), but I also note not having real experience with this. The For clause is used all over the place without problems, but one never knows ...
Please bear in mind that where we used the technique of the replace with an end-of-table key, this requires much system programming, since Go Top and Go Bottom are not allowed either, as these would leave you at not-wanted records (I won't work this out here, but it is true); with the For clause as mentioned, the normal Go Top / Bottom should be allowed.
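A sketch of both variants (table, tag and key names are hypothetical; the "highest" key value depends on your key design):

```foxpro
* Variant 1: before deleting, replace the key with a value that sorts last,
* so ordered access (Skip with an active index) never meets deleted records.
USE orders ORDER TAG ord_key
REPLACE ord_key WITH REPLICATE(CHR(255), LEN(ord_key))   && push to bottom
DELETE

* Variant 2: use the For clause, so deleted records drop out of the tag
* and the normal Go Top / Go Bottom stay usable.
INDEX ON ord_key TAG ord_live FOR NOT DELETED()
```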

Use Set Refresh To 60,60;
Where I stated that only the not-so-normal params would be mentioned in this list, this one seems not to fit in. However, expecting that everyone (or some) just put this param to 1,1 in order to have the data as current as possible: that is 100 % rubbish, as long as you follow the other rules that go with the setting of 60,60;
Without explaining it all now: the general rule of locking records properly is sufficient to get up-to-date data.
Please note that e.g. a Sum over a 1,000,000-record table may be several hundred times faster, depending on the slowness of the other resources outside the PC (the slower they are, the bigger the improvement).
Besides, allow yourself to switch off the client-side caching params where needed, because Fox will do it anyhow (but : I am not 100 % sure of that anymore nowadays).
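A minimal sketch of the rule (table and field names hypothetical): with a proper record lock, Fox re-reads the record from the server regardless of the refresh interval, so the long interval costs no data currency:

```foxpro
SET REFRESH TO 60, 60      && re-read buffered data at most once a minute

USE accounts               && hypothetical shared table
IF RLOCK()                 && the lock forces a fresh read of the record
    REPLACE balance WITH balance + 100
    UNLOCK
ELSE
    WAIT WINDOW "Record in use, try again" NOWAIT
ENDIF
```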

Never use stuff like EMS anymore; it will not only slow things down, but will even give memory problems.

Never use FLUSH; it slows things down, gets the logical transaction out of control, and should be removed from Fox. Personally I really don't understand why it is still supported, as it only causes problems at the logical level.

Never use Clear Program anywhere in code, for whatever reason (??); it causes everything on the program stack to be reloaded.

Set the Progwork and Tempfiles params in the Config.fxx to a local disk where possible. Note that a Reindex of a table now uses the local disk, where it must fit; if it doesn't, too bad, and point them to the network drive again.
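As a sketch, a config fragment (the exact file name and supported keywords differ per Fox version, hence "Config.fxx"; C:\TEMP is an assumption for the local work directory):

```foxpro
* Config.fpw fragment (keywords vary per version; paths are examples)
SORTWORK = C:\TEMP
PROGWORK = C:\TEMP
EDITWORK = C:\TEMP
TMPFILES = C:\TEMP
```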

When the app is heavily used by the one user on his/her PC, performance will degrade over the time of use under W95/W98/NT4 because of the poor memory management: Windows thinks you need more and more memory, resulting in a bigger and bigger swapfile, swapped in (and thus out) more and more.
A product like RamShield helps sufficiently, but takes (too much) time to install.
Use W2000, or advise the user to reboot the PC regularly.

When the PC's processor isn't that powerful, switch to as few colors as possible; a good idea for Citrix (etc.) use anyway.

For Citrix (etc.) and VFP : use MouseMove methods as little as possible, and for sure not on larger areas.
Never use Paint methods.

When you calculated that you need 10 GB of data, advise the user to buy 80 GB disks.
When you need 2 GB, still buy the 80 GB disk;
Keep in mind that the slowest thing in the disk subsystem is the head movement, expressed by the average access time, which for many years (over 10) has been around 8 ms. This average holds for the movement of the disk head over half of the disk surface; that is, the used surface. Thus, when using 10 GB of an 80 GB disk, you may roughly say that the performance of the disk subsystem is 8 times better : the average of 8 ms becomes 1 ms. Using 2 GB of an 80 GB disk, it will be 0.2 ms and you have gained a factor 40.

With the processor speeds of today in server as well as client, and when all the other params mentioned here are satisfied, the network speed of a 100 Mbit network will be the bottleneck, 100 % for sure;
Upgrading such a network is not realistic (yet), but keep in mind that when 10 Mbit is used, you gain a factor 10 by upgrading to 100 Mbit, which just is realistic.

Derived from the previous point, one should keep in mind that a network (and every resource) has a capacity, which by itself is derived from speed;
In fact the previous point says that you, as the only user on the network, are able to encounter the network as a bottleneck, because it is the slowest part of the chain (when the disk subsystem is tuned sufficiently, and also knowing the server's cache avoids that bottleneck at all). Now note that when you as this one user are waiting for the transport time on the network, another user immediately notices that you are around too, once the bandwidth of the network is used up. Thus, the faster you are gone, the quicker another will have access. Once you are not gone fast enough, there will be a collision (Ethernet, not Token Ring), and now you really are in trouble, because your access to the network has to happen another time, creating even more traffic. Your network will thrash ...
There should be only one conclusion here : try to avoid everything which uses up resources, because otherwise the network will thrash, and everything collapses.
Finding the proper balance across all resources must be seen as too hard a job, where every other month something becomes faster and the bottleneck may be elsewhere again. Sort of luckily, we are in the stage of the network being the bottleneck for a while, so we know what to do : avoid all unnecessary I/O's.
Keep in mind that a client doing too many memory operations will not thrash that client, and keeps the bottleneck away from the network stuff. Thus, where this looks like a solution (the user getting a steady, though not optimal, response), it doesn't work anymore when working on Citrix (etc.).
Yes, I know this sounds very stupid, but forcing a client to stop working for 0.2 secs every second may prevent the network resources from thrashing. And once thrashing is there, all is lost.
So by optimizing everything on the client, the thrashing of certain shared resources is encouraged.
Last note on this :
Our first ERP implementations, back in 1990, ran on two 386/33 MHz servers and supported 50 users. The processor utilization (Novell) hardly ever came over 10 %;
Users of course did not have the response times of today, but it was acceptable. Now bear in mind that the client does much, much more than the server, and that the client's processor of 33 MHz back then wasn't able to feed the network resources with too much data at any one time. How different this is today, where the client is 30 times faster, and thus responds to the server 30 times faster. And yes, the server is 30 times faster too, but some resources for sure are not, such as the disk.
However, if all params are set / performed as mentioned, the only thing you will notice is the speed of the network as the bottleneck, and not its capacity. Thus, a 100 Mbit network is 10 times faster than 10 Mbit but has its limit at this 100 Mbit. Because you are off the network 10 times faster, 10 times more users can access it with the same amount of data. The latter is always sufficient for a Big System as I define it, having 100 users on this one network segment. But each user will find him-/herself waiting on the network.

Where Database Servers (Oracle, SQLServer, etc.) may have been invented to avoid network I/O, try to think of this reason as rubbish, because you should just avoid unnecessary I/O's in the first place; once you have achieved that, any SQL call implies just tonnes of overhead. And when you have the unnecessary I/O's after all, you end up with both the overhead and the server's CPU at 100 %, or its disk thrashing (either is as bad). My personal conclusion on this one : only perform the I/O's necessary and use native DBFs. That is, for the response argument. But of course there's more to it ...

Never think that Citrix (etc.) is the solution for a non-remote operation;
Where we all have 600 MHz+ PCs on the desk, the Citrix server is the same 600 MHz+ PC. Okay, put 4 processors in there and a nice disk subsystem, and you have 4 of these PCs. But put 40 users on there, and they virtually use a 60 MHz PC if they press buttons at the same time. Of course they don't all press at the same time, but the message is clear. A Terminal Server is for remote purposes, not for maintenance or cost purposes.

From most of the above it is implied that the disk subsystem of Citrix (etc.) must be super too. Note that where all processes are normalized to their proper location, and where all I/O suitable for local handling is handled locally, that I/O will be on Citrix' disk(s); because all the virtual disks of the users will be on separate physical locations on these disks, the disk head will swap heavily. For this reason, the virtual disks should be kept as small as possible.

When all of the above is implemented well, expect the following from your Big System :

Suppose there are 100 users with 600 MHz clients on this one 600 MHz (one CPU) server, performing 1,000,000 elementary logical transactions a day: the processor's utilization will not come over 3 %, the disk lights blinking smoothly once per few seconds or so, because of postponed writes and of reads served mostly from the server's cache when they couldn't be retrieved from the client's cache. The memory the server needs (for Novell) depends on the number of cache blocks, which depends on the size of the disks (can't be helped). But suppose 8 GB of disks are mounted: 128 MB will give "immediate" response to any user. Yes, the more memory the bigger the cache, but that does not guarantee a better response forever; the more cache operations, the more the processor is used, in the end bottlenecking everything.
100 users may not qualify as a Big System, but 1,000,000 transactions does; please note that there is so much over-capacity in this configuration that I am pretty sure it can cope with a factor 10 more users (and transactions), still giving good response.

Though some (or all) of you may think of this as impossible, please read the above again, and add up the factors mentioned here and there. A "here" of 100 and a "there" of 40 gives 4,000 already ! Of course you will have some of the params implemented already, but when only one "factor 40" is not, your system can cope with 40 times more users (transactions) than you thought. Also note that many of the params cannot be quantified (easily), but they all contribute to better performance.

I know from experience that many of my statements will be questioned on their truth; e.g. the RAID-5 vs. Duplication discussion always takes half an hour to explain by telephone, and therefore not everything can be explained in short in this writing.
Of course I am very willing to be wrong on any topic, so please feel free to comment so I will learn too; IMO learning in this world never ends. Adding new topics: yes please. Think of the objective : giving any system the super performance no one held for possible, giving you the opportunity of selling against some big brothers, where the latter need 100 x 40 x etc. bigger hardware in order to support the needed number of users. For you, this is easy selling ...

I'm sorry for not having experience with Database Server systems (in Fox), knowing that most of the response sadness occurs over there. Maybe in a year or so.

-- Peter Stordiau
Category Big System Category Performance
( Topic last updated: 2004.01.15 05:51:56 AM )