The area beyond 2 GB is understood by many VFP developers to be used for record locks. When you issue RLOCK(), VFP supposedly locks one byte at
HEADER() + RECNO() * RECSIZE() + 2 GB.
However, VFP does not, in fact, lock at
(HEADER() + RECNO() * RECSIZE() + 2 GB); it locks at
(2 GB - RECNO()), which can intersect real records in a particularly large table. You can observe this with a file monitor such as Sysinternals' FileMon. See High Range Locking Bug.
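A quick sketch (in Python, not VFP) of the two offsets described above; the header/record sizes are made-up illustrative values. The lock bytes for records 1..RECCOUNT() occupy the range [2 GB - RECCOUNT(), 2 GB - 1], so they collide with real data as soon as the file grows past 2 GB - RECCOUNT():

```python
GB2 = 2**31  # the 2 GB boundary


def assumed_lock_offset(header, recno, recsize):
    # Where many developers assume RLOCK() locks: past 2 GB,
    # mirroring the record layout.
    return header + recno * recsize + GB2


def actual_lock_offset(recno):
    # Where VFP reportedly locks: counting back from 2 GB.
    return GB2 - recno


# Illustrative table: 296-byte header, 2000-byte records, ~1.07M records.
header, recsize, reccount = 296, 2000, 1_073_210
file_size = header + reccount * recsize

print(actual_lock_offset(1))       # 2147483647
print(file_size > GB2 - reccount)  # True: the lock range overlaps real records
```

With these numbers the table is about 2.146 GB of data, which already reaches past the start of the lock range at 2 GB - RECCOUNT(), so some record bytes and lock bytes occupy the same offsets.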
Why doesn't VFP simply lock the bytes of the record itself?
Because a lock prevents anyone else from accessing the locked region: neither read nor write access is possible. By locking a few virtual bytes instead, others can still read the record while it is locked. This locking scheme is also the reason that Clipper and FoxPro don't respect each other's locks.
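VFP's real implementation uses the Win32 LockFile API, which enforces mandatory byte-range locks. The sketch below uses POSIX `fcntl.lockf` (advisory locks) only to show the idea: a byte far past end-of-file can be locked as a stand-in for the record, leaving the record's real bytes freely readable.

```python
import fcntl
import os
import tempfile

# Create a small file standing in for a DBF with one record.
fd, path = tempfile.mkstemp()
os.write(fd, b"record data")
os.close(fd)

writer = open(path, "r+b")
# Lock one "virtual" byte near the 2 GB mark, far beyond EOF.
fcntl.lockf(writer, fcntl.LOCK_EX, 1, 2**31 - 1, os.SEEK_SET)

# Another handle can still read the actual record bytes.
with open(path, "rb") as reader:
    data = reader.read()
print(data)  # b'record data' -- readable while the virtual byte is locked

fcntl.lockf(writer, fcntl.LOCK_UN, 1, 2**31 - 1, os.SEEK_SET)
writer.close()
os.remove(path)
```

Note the sketch only demonstrates that a lock past EOF is legal and leaves the data region untouched; with mandatory Windows locks, locking the record's real bytes would have blocked that read.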
Is this the source of the 2GB limit, or is the choice of 2GB+xxx just making good use of a limit otherwise present? - ?wgcs
The 2 GB offset resulted from the fact that, when the locking scheme was developed, no file could exceed 2 GB on any platform Fox ran on, so 2 GB was a safe offset to use. Over time things have changed. The big problem is that if the lock offset were changed, prior versions of Fox would not respect the locks of newer versions. This is why the issue is not a simple no-brainer.
-- Jim Booth
Every time I've heard it explained, it's the source. -- Mike Helland
Not to be a conspiracy theorist, but maybe to play devil's advocate: wouldn't it also have to do with marketing a little? If the limit were extended, it might cut into sales of SQL Server, as many current and future users of DBC/DBF would have one less incentive to migrate. -- Alex Feldstein
Alex, I can't say that your issue doesn't play at all in the decision; however, to me, backwards compatibility on lock respecting is a paramount reason to NEVER change the 2 GB limit. Anyone using VFP who needs more than 2 GB in an entity can either partition the data or use some other backend to store it. Having a new version of VFP that totally precludes any prior version sharing data in the same environment is totally unacceptable to me. So basically, I don't really care whether MS is worried about VFP killing SQL Server sales; I don't want to see the 2 GB limit changed unless there is a provision for prior versions of VFP to coexist with newer versions. This is NOT an issue of code compatibility. -- Jim Booth
Why not just have another compatibility switch?
It is not only a question of a compatibility switch: to remove the 2 GB limit, MSFT would have to invent a completely new DBF file format (and FPT and CDX formats as well).
-- Igor Korolyov
Yes, you'd have to invent a new DBF structure, but it's not really that complicated to do: make a few reserved areas bigger and use 64-bit integers for offsets... problem solved. No? -- Peter Crabtree
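A sketch of Peter's point: today's DBF structures fit file offsets into 32-bit fields, whose signed maximum is the familiar 2 GB ceiling; widening them to 64 bits lifts that ceiling, at the cost of a new, incompatible file format.

```python
import struct

MAX_32BIT_OFFSET = 2**31 - 1
print(MAX_32BIT_OFFSET)  # 2147483647 bytes, i.e. the 2 GB limit

five_gb = 5 * 2**30
packed = struct.pack("<q", five_gb)  # a 5 GB offset fits a 64-bit field
assert struct.unpack("<q", packed)[0] == five_gb

try:
    struct.pack("<i", five_gb)       # ...but overflows a 32-bit field
    overflowed = False
except struct.error:
    overflowed = True
print(overflowed)  # True
```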
Contributors: Christof Wollenhaupt, Carl Karsten, Peter Crabtree
Category VFP Functions
( Topic last updated: 2006.01.12 09:34:41 AM )