VFP Misused And Abused

Namespace: VB

This page is for the discussion of VFP commands, functions, and programming approaches that are often misused or abused. It is not intended as a deprecation of anything, but rather as a list of some of the places in VFP where pitfalls lie: the commands, functions, and features of VFP that can cause problems when used incorrectly, and that often are used incorrectly.

See Also VFP rookie mistakes
  • Very often SET FILTER is used where a parameterized view or SQL SELECT command would be better. Using SQL also buys you further SELECTs on the filtered result, _TALLY to verify the count, etc.
  • You can get into trouble if you filter on an expression with a variable that goes out of scope before the table gets closed or the filter reset
  • It can be extremely slow if you filter a big table on an expression that is not Rushmore optimized.
  • Be aware that the filter does not take effect until the record pointer is moved. If the current record does not meet the filter condition when the SET FILTER command is executed, the record pointer does not automatically change. You should force a record pointer movement via GO TOP or some other command after SET FILTER. Be aware that GO TOP is not Rushmore optimizable. Your best bet would be to use LOCATE instead.
  • Grids used to display SET FILTER results can be terribly slow; consider SET KEY instead of SET FILTER. Since VFP 9 the grid has an Optimize property for Rushmore optimization, but using SQL SELECTs or Views is IMHO still often a better choice.
  • The SQL commands of SELECT, UPDATE, INSERT, and DELETE executed on a cursor created with USE or SELECT will ignore any FILTER placed on that cursor. In other words, the SQL command's WHERE clause overrides the FILTER. However, SQL commands do respect the SET DELETED setting.
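To illustrate the first point above, here is a minimal sketch (Customers, State, and Balance are hypothetical names) of replacing SET FILTER with a SQL SELECT, which also gives you _TALLY and further SELECTs on the result; the last lines show LOCATE forcing the pointer move after SET FILTER:

```
* Instead of: SET FILTER TO State = "NY"
SELECT * FROM Customers WHERE State = "NY" INTO CURSOR csrNY
? _TALLY    && verify the count of matching records
* The result cursor supports further SELECTs:
SELECT COUNT(*) AS nCnt FROM csrNY WHERE Balance > 0 INTO CURSOR csrCnt

* If you must filter, force the pointer move with LOCATE
* (Rushmore optimizable) rather than GO TOP (not optimizable):
USE Customers
SET FILTER TO State = "NY"
LOCATE
```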

Use SET FILTER on small sets, where no downstream sub-selects are needed, and where the filter condition does not go out of scope before the filter is reset with SET FILTER TO or the table is closed.

Filters CAN be used on large result sets as long as the filter doesn't eliminate a large number of records and your filter conditions are Rushmore optimizable.
You can specify a Cursor created by the USE command as the "Target" of the SQL statements INSERT, UPDATE, DELETE or SELECT. Keep in mind, though, that the SQL Command looks "through" the Cursor to the underlying Table or View and may update ANOTHER Cursor based on that same Table or View in the same data session.

  • If the target Cursor is based on a VIEW that has a WHERE clause, the WHERE clause is respected by the SQL statements.
  • If the target Cursor has a FILTER created by SET FILTER TO, the filter is ignored by SQL statements.
  • SQL statements respect the SET DELETED setting.
  • After the execution of an INSERT, DELETE, or UPDATE statement, the selected Alias will be the one that was used by the SQL statement. If there is more than one Cursor open USE-ing the same Table or View, you must test using ALIAS() or SELECT() to determine which was used.
  • In situations where multiple cursors have the same underlying Table or View, INSERT, UPDATE, and DELETE respect ONLY the buffering setting of the Cursor actually used - which cannot be determined in advance. If buffering is enabled on that Cursor, the other Cursors will NOT reflect the result of the SQL command until a TABLEUPDATE() is issued on it.
  • If the INSERT, UPDATE, or DELETE command specifies a View or Table (not a cursor) as its target, and if no existing Cursor USEs that underlying Table or View, then a Cursor is created with the same name as the Table or View. The changes are Committed to both the Cursor and the underlying Table(s). Exception: If the Target is a View and the SQL command is the INSERT of a single row, then the change will need a TABLEUPDATE() to be committed.
  • None of this behavior applies to Cursors created by SELECT statements. A SELECT statement creates a cursor that is not tied to any underlying Table or View. The result of a SELECT command is a Cursor based on an independent, temporary table.

Example 1:
Use Contacts Alias c1
Set Filter To !firstname=="Joe"
Select * From c1 Where firstname=="Joe"

Result: The SELECT statement ignores the FILTER and returns all records with firstname "Joe".

Example 2:
Use Contacts Alias c1 In 1
Use Contacts Alias c2 In 2 Again
Update c2 Set firstname="Joseph" Where firstname=="Joe"

Result: The Contacts table and both cursors are updated to make Joe into Joseph. The current Alias is now c1

Example 3:
Use Contacts Alias c1 In 1
CursorSetProp("Buffering", 5, "c1")
Use Contacts Alias c2 In 2 Again
Update c2 Set firstname="Joseph" Where firstname=="Joe"

Result: The Cursor c1 is updated to make Joe into Joseph. The underlying table and cursor c2 are NOT updated until the TABLEUPDATE() function is executed on c1. The current Alias is now c1

You can get into trouble thinking that the cursor you created is the real source OR target of the SQL commands. There are ways to avoid problems with this behavior:
  1. For each Table or View in a Datasession, have only one Cursor created by a USE command; create the rest using SELECT statements.
  2. If you need multiple, buffered Cursors of the same Table or View, try to put them in separate Datasessions.
  3. If #2 doesn't fit and you can't use multiple Datasessions, then remember to TABLEUPDATE() the Cursor you change and REQUERY() the other Cursors based on Views that include the target Table.
  4. Directly update Cursors with a REPLACE, APPEND or DELETE statement instead of SQL statements.

Ben Creighton
When accessing SQL Server tables through ODBC, it is important to use the "SQL Server Native Client 10.0" driver. With the Date, Blob, Memo-Binary, and Varbinary data types in VFP, you will need to use this driver to avoid translation errors when using SQL commands on remote views over an ODBC connection. Earlier drivers will not work.
  • If you create a Connection object in the VFP Database, make sure the correct driver is selected when creating the Datasource.
  • If you use a connection string, include driver={SQL Server Native Client 10.0} in the connection string.
  • If you use Windows' Data Sources (ODBC) utility under Control Panel / Administrative Tools, ensure you are using the right driver.
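As a sketch, a connection string of this form selects the driver explicitly (the server and database names here are placeholders):

```
lcConnStr = "driver={SQL Server Native Client 10.0};" + ;
            "server=MyServer;database=MyDatabase;trusted_connection=yes"
lnHandle = SQLSTRINGCONNECT(lcConnStr)
IF lnHandle > 0
    * ... SQL pass-through or remote view work here ...
    SQLDISCONNECT(lnHandle)
ENDIF
```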


If you are getting errors when attempting to use VFP's Date, Blob, Memo-Binary, and Varbinary fields to map to SQL Server's date and varbinary fields, use the correct ODBC driver.
INDEX...UNIQUE should not be used to enforce uniqueness of a value as that is not what it does.
INDEX...UNIQUE causes the index to only reflect one instance of each unique value of the index expression in the table. For example, one might use INDEX...UNIQUE to obtain a non-duplicated list of states from an Address table.

The "unique" index has some insurmountable problems: it cannot be depended on to be maintained dynamically. Because Fox only updates the index entry for the record being altered, certain conditions can cause a UNIQUE index to become incorrect. For example, a unique index on state will reflect one record for each state. A table may contain 100 records for the state of NY; if you delete the one reflected in the unique index, VFP updates the index by removing the reference to NY for that record. However, since only the altered record is considered in the update of the index, there will now be no reference for the state of NY in the index, even though 99 records remain in the table for that state. (This situation can get even worse.)


If you need a non-duplicated list of values from a table, the UNIQUE index can be used. However, it should be created, used, and then erased, as these indexes are not kept accurate dynamically.
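The create-use-erase pattern might look like this (Address and State are hypothetical names); a SELECT DISTINCT avoids leaving a stale index behind entirely:

```
USE Address
INDEX ON State UNIQUE TO statelist   && temporary standalone .IDX file
* ... use the non-duplicated list here ...
USE                                  && close the table (and the index)
ERASE statelist.idx                  && the index will not stay accurate

* Often simpler - no index to clean up:
SELECT DISTINCT State FROM Address INTO CURSOR csrStates
```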
PUBLIC variables
Variables are often declared PUBLIC when a PRIVATE would work just as well. Since PUBLIC variables can have severe and difficult-to-find side effects, it is very poor practice to declare them when they are not absolutely necessary.
See Public And Private Variables (for a more complete discussion of this issue).


Declare your global variables as PRIVATE in your main calling program rather than PUBLIC.
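For example, a main program can declare its globals PRIVATE; they are then visible to every routine it calls and are released automatically when the main program ends (all names here are illustrative):

```
* main.prg
PRIVATE gcAppName, gnTimeout
gcAppName = "MyApp"
gnTimeout = 30
DO EventLoop    && gcAppName and gnTimeout are visible here and below
* both variables are released when main.prg terminates
```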
Macro Expansion
Macro expansion is sometimes used in places where another approach (EVAL() or Name Expression) would work as well. Macros are also used in places where an alteration in system design would preclude the need for them.

There are things that can be done with macro expansion, including some data-driven approaches, that are not possible any other way.


Use the EVAL() function or a Name Expression rather than macro expansion wherever possible.
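A quick sketch of the difference, using an illustrative alias and field name:

```
lcAlias = "Customers"
SELECT &lcAlias     && macro expansion: the line is recompiled at run time
SELECT (lcAlias)    && name expression: same effect, no recompilation

lcExpr = "Customers.State"
? EVALUATE(lcExpr)  && EVAL() for expressions that yield a value
```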
While inheritance is arguably among VFP's most powerful and flexible features, it is commonly overused and/or abused by VFP developers. Many misuses are possible, signs of which are DeepClassHierarchy, InheritanceAndComposites, ImproperAbstraction, lack of design refactoring, and others.

Use inheritance only where the potential new subclass "Is A" parent class. Avoid creating hierarchies below populated Composite Classes.

Don't forget that a DancingFrog 'Is A' frog... and a BurpingFrog 'Is A' frog, but what about a DancingBurpingFrog... 'Is A' what? Sometimes, just because another class 'Is A' existing class doesn't mean that you want to subclass it. Aggregation (Wrappers) and BehaviorClasses can be used to keep the hierarchy nice and flat while leaving the possibilities endless.

See also Adaptive Inheritance, Extensibility Inheritance, Inheritance, Inheritance Heuristics, Inheritance in Java, Inheritance Reuse, Decorator Pattern, Decorating with This _ Access.
Packing a table is a time-consuming and dangerous process that is better left to an administrative type of maintenance function and not used in the mainstream flow of an application.

This is not to say that packing tables as a maintenance activity is wrong; rather, packing is not a process that should be done "interactively", meaning while the user is waiting for the system.

Packing is time consuming and requires exclusive use of the tables and therefore is better left as an administrative maintenance activity.

Because of the way PACK works (it copies all undeleted records to a temp file, kills the original, and then saves the temp as the original), there is a period of time (after the original has been blown away, while the undeleted records exist only in the temp file) during which a power loss could mean a complete, unrecoverable loss of data. You can easily write your own packing routine without using PACK to be safe.
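A minimal sketch of such a routine, assuming a hypothetical MyTable and exclusive access (a table with memo fields and index tags needs its .FPT renamed and its tags re-created the same way):

```
USE MyTable EXCLUSIVE
COPY TO MyTable_new FOR !DELETED()   && live records go to a NEW file
USE                                  && close the original
* Only after the copy has succeeded do we touch the original:
RENAME MyTable.dbf TO MyTable_old.dbf
RENAME MyTable_new.dbf TO MyTable.dbf
* re-create the tags on the new file, verify it, and only then:
* ERASE MyTable_old.dbf
```

Unlike PACK, the original table survives intact until the copy is complete, so a power loss at any point leaves at least one good copy on disk.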

PACK copies the UNdeleted records to another file, but if it encounters NO deleted records in the process then the (new) copy of the table is deleted (leaving the original table fully intact and in use) and the REINDEX is skipped. However, the .FPT will still be analyzed and compacted if warranted.

The way PACK deletes the original file and renames the temp file can cause the security permissions of the file to be lost under Novell.

PACK does an implicit REINDEX (See REINDEX below) but only if there actually was at least one DELETED() record in the original table being PACKed. If you must pack, it might be advisable to DELETE TAG ALL, PACK and follow the recommendation for REINDEX below.

While a PACK can result in (largely) UNfragmented .DBF and .CDX files for the table that is PACKed, the .FPT associated with it will remain fragmented even if the .FPT is compacted by the PACK.
Use the PACK command carefully. It is best to do your packing as an offline administrative activity and not as part of the flow of your application. Remember that PACK needs working room: up to twice the size of the .DBF plus up to twice the size of the .CDX in order to complete successfully.
The REINDEX command depends on a valid header record in the index file. Although it is very useful from the Command Window for a developer, it should not be used in program code to recreate indexes. Indexes should be dropped (DELETE TAG ALL) and re-created individually using INDEX ON ...

Also, be aware that DELETE TAG ALL removes persistent relationships as well. When using a DBC this is a dangerous command.

Avoid REINDEX. Use explicit INDEX ON instead. See GenDBC, GenDBCX, GenDbc Lite.
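The drop-and-recreate approach looks like this (the tag names and expressions are illustrative; remember the warning above that DELETE TAG ALL also removes persistent relationships when the table is in a DBC):

```
USE MyTable EXCLUSIVE
DELETE TAG ALL    && drops every tag (and, in a DBC, persistent relationships!)
INDEX ON CustomerId TAG CustId
INDEX ON UPPER(LastName) TAG LastName
```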

Writing your own code to handle recreating indexes and persistent relationships in the DBC falls into the category of VFP rookie mistakes. Buy the Stonefield Database Tool Kit, which does this and a number of other things for you very well.

If you have no other indication that the index is damaged then the REINDEX is perfectly safe to use.
Bull. Just because you can't tell it's broken, it must be working? REINDEX works perfectly well given valid input. However, rather than proving the inputs are valid, I'd rather rebuild.
Bull? Bull??? This has got to be the ultimate in paranoia! I mean, the exact same thing can be said for EVERY file on any HD. Why would the .CDX, and specifically only the header portion of the file, be any more susceptible to error/corruption than any other file on your HD???
I'm afraid that this is an "old wives tale" that besmirches the reputation of FP/VFP/ISAM for no legitimate reason.
It is one thing to say that you prefer to rebuild in order to ensure that your rebuild programming is accurate, but to use the invalid excuse that it may be broken is the BULL here. -- Jim Nelson
If you can supply a correct template for REINDEX, i.e. a CDX file having correct "header" records but probably outdated or empty leaves (say, a CDX file from the installation media, or from some good old backup copy), then REINDEX will be a perfect solution - and I bet it is simpler and more reliable than INDEX ON + all the stuff to recreate persistent relationships. By the way, there are some limitations in the INDEX ON and ALTER TABLE commands in VFP: for example, you can't programmatically define an FK field that is at the same time a PK field for the same table (to create a 1-to-1 relationship), you can't create a filtered PK (FOR !DELETED() is sometimes necessary) in old VFP versions, and maybe something else. In conjunction - it would also not be a bad idea to restore the dbc/dct/dcx files from this backup or initial archive during the reindex process. -- Igor Korolyov
Grids are wonderful tools, however they are often at the root of a poorly designed UI that will not scale to Client Server or the net.

Have the user identify the record they desire, run the query on the server, and show the matching record(s). If there are multiple matching records, use a grid to show the limited number of matching records.
Combos and Lists
These two controls are great when they are used correctly. However, too often we see questions like "how do I get my combo of 10,000 customers to work faster?"

Keep in mind that a person has to navigate these controls to find what they want. Ask yourself, would you want to use a combobox to select one out of 10,000 possible values?


Don't use Combos and Lists for large numbers of options.
ON KEY LABELs (OKLs) are intercommand interrupts; that is, they will interrupt running code. When you need to interrupt code they are the only way to do it. However, they are often used to provide "hot keys" for functionality that would be better handled through a KeyPress event.

OKLs can be very helpful in your development environment for things like setting debugging switches.

OKLs are also useful when you want the action to interrupt running code, as that is exactly what they are designed for.


Don't use OKLs as a replacement for shortcuts or hot keys, as they can cause problems that can be very difficult to debug later.

An alternative to OKLs for shortcut or hot keys is to DEFINE POPUP with bars for each short cut key and then never ACTIVATE the popup. The short cut keys will work even though the menu is not visible and they are automatically limited to user input wait states (won't interrupt code).
Can you provide sample for this technique please? (Define popup with no activate) -- Zlatin Zlatev
DEFINE POPUP hotkeys
DEFINE BAR 1 OF hotkeys PROMPT 'browse' KEY CTRL+B
ON SELECTION BAR 1 OF hotkeys BROWSE
* the popup is defined but never ACTIVATEd; CTRL+B still works

Keep in mind that the above code will not work within the IDE (where other shortcuts may take precedence), but it does work well in application code.


A POP MENU command following a PUSH MENU command does not delete the pads' ON KEY LABELs created from within the .MPR. The pads must be explicitly RELEASEd when the ON KEY LABELs are no longer desired.

DO mymenu.mpr

would retain the menu's pads' KEY definitions until they are RELEASEd.
PARAMETERS() returns the number of parameters passed on the most recent call. This means it is not necessarily the number of parameters passed to the current routine. Many things can cause the return value to be other than expected, e.g. an OKL fired, or another function was called before PARAMETERS() was checked.


Use PCOUNT() instead -- it always returns the number of parameters passed to the current routine regardless of any other calls made since.
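A sketch of the difference (LogCall is a hypothetical helper taking no parameters):

```
PROCEDURE MyProc
LPARAMETERS tcA, tcB
LogCall()         && this call resets what PARAMETERS() reports
? PARAMETERS()    && now reflects LogCall's call, not MyProc's
? PCOUNT()        && still the number of parameters passed to MyProc
ENDPROC
```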
Non-Standard Menu Structures A lot of people are familiar with the "File, Edit, View..." menu structure. Why are so many developers compelled to break this standard? Where possible, try to stay within the boundaries of what users have come to expect from UI elements, especially menus. The whole idea of keeping to standards is to make things easier for the users.


Try not to deviate from menu standards.
WAIT WINDOW should be used sparingly because it is not a Windows standard. It may be OK for debugging purposes (although better options exist like Debug Out and Alert). Production code should use Messagebox() or Steven Black's public domain Msg Svc() or non-modal screens showing progress, thermometers, etc.

Another view: it's very nice for status messages that don't need to be acknowledged: those that show progress so the operator doesn't wonder if the program is locked up.

For showing status, you can use a disabled edit box: just add a line of text to its Value property for each message you want to display (precede the text with CHR(13)+CHR(10) to move to a new line). The advantage is that the user can also see previous status messages.

If you want to mix status and progress, use a listbox instead of an editbox. You can add a new item to the list for each status report. But if you want to have a progress message (e.g. Formatting 1 of 20, changes to 2 of 20, etc) then you can update the last item in the list. Also you can then write in a final status (e.g. Formatting complete) after showing progress.
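A sketch of the listbox approach, assuming a listbox named lstStatus on the form:

```
LOCAL lnI
WITH thisform.lstStatus
    .AddItem("Formatting 1 of 20")
    FOR lnI = 2 TO 20
        * update the last item in place to show progress
        .List(.ListCount) = "Formatting " + TRANSFORM(lnI) + " of 20"
    ENDFOR
    .AddItem("Formatting complete")    && final status stays in the log
ENDWITH
```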
REPLACE is a scoped command and its default scope is the NEXT 1 record. If you execute REPLACE ALIAS.FIELD WITH ... and the current work area is at EOF() or has no table open, then no REPLACE will occur because there is no NEXT 1.
Here's some code to demonstrate:
OPEN DATABASE (HOME()+"samples\northwind\northwind.dbc")
USE Categories IN 0
USE Categories AGAIN ALIAS CategoryAlias IN 0
SELECT CategoryAlias
BROWSE NOWAIT                && opens a window named "CategoryAlias"
MOVE WINDOW CategoryAlias TO 0,50
SIZE WINDOW CategoryAlias TO 20,80
SELECT Categories
BROWSE NOWAIT                && opens a window named "Categories"
MOVE WINDOW Categories TO 0,0
SIZE WINDOW Categories TO 20,80
GO BOTTOM
SKIP                         && leave Categories at EOF() so there is no NEXT 1
REPLACE CategoryAlias.CategoryName WITH "Nothing Happened"
WAIT WINDOW "See? _TALLY is "+ALLTRIM(STR(_TALLY))+". Now hit any key to do the replace!"
LOCAL m.lcOldValue
m.lcOldValue = CategoryAlias.CategoryName
REPLACE CategoryName WITH "Something" IN CategoryAlias
WAIT WINDOW "This time _TALLY is "+ALLTRIM(STR(_TALLY))+". Hit any key to put it back and clean up!"
REPLACE CategoryName WITH m.lcOldValue IN CategoryAlias

There must be a table open or in USE in the current work area or REPLACE will prompt you for a table, unless you use the IN clause. When the following prompts you, just hit escape and then choose ignore. Here's the proof!
OPEN DATABASE (HOME()+"samples\northwind\northwind.dbc")
USE Categories IN 0
SELECT 0                     && make the current work area an empty one
REPLACE Categories.CategoryName WITH "Nothing"
USE IN SELECT("Categories")
-- Mike Yearwood


Either select the work area in which your replace is to occur or use the IN clause of the REPLACE command. Further, since you're already either in the correct alias or specifying it via the IN clause, you need not specify the alias for each field.
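Both safe forms, side by side:

```
* Either select the work area first:
SELECT CategoryAlias
REPLACE CategoryName WITH "Something"

* ...or name it with the IN clause (no alias prefix on the field):
REPLACE CategoryName WITH "Something" IN CategoryAlias
```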
Avoid using SET DATASESSION when a form is active and use it sparingly otherwise. Using SET DATASESSION can often cause problems that prevent objects from releasing and that are difficult to debug. Also, be aware that when method code is running VFP automatically switches the data session to the session the object was created in. Objects can talk to objects created in other data sessions by retrieving a reference to the object or by passing object references as parameters. If you address data via properties and methods rather than by directly accessing tables it's easy for objects to access data that is open in another data session.

I imagine that this is an often-abused SET command because it's one that is hard to work around in certain situations. For example: every form that has a private data session has all the SETs (SET ECHO, SET TALK, and SET DELETED being the most annoying to me) initialized to the VFP defaults. I'd like a single place in my applications to initialize these for every form, and I'd like this place to be in the application object (rather than in a .PRG file). However, when the form data-session initialization method of the application object is called, the data session is automatically switched to the one selected when the application object was created. So the only way for that method to change the SET TALK, etc. settings for the form is to SET DATASESSION TO the form's data session, then set it back. (This is very important to avoid changing the app object's data session - see the Data Session topic: changing the data session with SET DATASESSION changes the data session for an object permanently.) As a result, until everything stabilized, there were many "problems that prevented objects from releasing and that are difficult to debug"... What would be a better approach? (I would suggest classing all forms in an application off of a single form class which knows how to set up its own data session.)

Note: When a form uses a private data session the data session is created before the form. From within a Form's Load event the form's private data session already exists.

The "VFP object is really a COM object" idiom.
See When Fox Objects Are Not Really Fox Objects. This applies to using FOR...EACH to iterate native VFP collections, and to objects emanating from properties and collections of the _VFP system memory variable, which is typically used for COM Automation.
This is fixed in VFP 9 with the FOXOBJECT keyword addition to the FOR EACH command.
  FOR EACH objectVar IN SomeGroup FOXOBJECT
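For example, iterating a form's Controls collection as native Fox objects in VFP 9:

```
FOR EACH loControl IN thisform.Controls FOXOBJECT
    ? loControl.Name    && loControl is a true Fox object reference
ENDFOR
```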

Contributors: Jim Booth, John Koziol, Cindy Winegarden, Alex Feldstein, Ed Leafe, Steven Black, Mike Feltman, Barbara Peisch, Mike Yearwood, Jim Nelson

Category VFP Commands, Category Tool Abuse, Category 3 Star Topics
( Topic last updated: 2012.04.01 12:36:03 PM )