Though this error may be seen as the opposite of Error Reading File, the contributor of this text, who gets that error regularly, has somehow never had Error Writing To File. In his opinion, the latter should happen just as often as Error Reading File, but it simply does not. What may be the explanation?
Firstly, we must assume that Error Writing To File can, and will, happen just as well in those environments where the Reading error occurs. In environments where the Reading error occurs but the Writing error does not, the cause must therefore be that no Read is performed first (before the Write). This can occur in two situations:
1. It concerns an internal operation of Fox that simply does not need a Read first;
2. It concerns an app which, formally, is not working properly, in the sense of not locking the record where it should.
Note on 2 that there may not be any proof of this; it is merely derived from never having the Write error where the Read error occurs;
Thinking of a Write error happening after the last successful read, it may simply be in the time between the two that the connection gets lost. However, that must be seen as too much coincidence, because then the Write error should occur too. In other words, when the Read error does occur and the Write error does not, Fox must perform a re-read in order to process the Write (and the failing Write never gets past that Read). Since the time between these two is very small, the chance that the connection is lost exactly there is nil.
If this is right, a locked record somehow implies some read on the server, which makes sense when you consider that the client needs to find out whether the record is locked in order to write. Thinking further, things fall into place:
in order to write, Fox WILL lock first, and therefore reads from the server to see whether the record is locked by someone else.
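In app code the same order is explicit; a minimal sketch (table and field names are made up for illustration):

```foxpro
USE accounts SHARED
GO 5                          && some record to update
IF RLOCK()                    && lock first - this forces a check against the server
    REPLACE balance WITH balance + 100
    UNLOCK
ELSE
    ? "Record is locked by someone else"
ENDIF
USE
```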
Conclusion for now: it must concern situation 1.
Anyone having the Write error should now find out whether there are Read errors too; when there are not, it may just be a Fox or Windows bug. This may be in the area of wrongly mapped lock bytes, e.g. the 2 GB offset not being added.
-- Peter Stordiau
Update to below - Well, I am finally here on-site to check things out first hand. What a mess! Cables run everywhere (including under file cabinets!), mixture of network cards and speeds, etc. This is going to be fun to untangle. I'll post an update of how much hair I have left later this week. Meanwhile, where is that program that someone wrote that works the network from Foxpro? -- Randy Jean
I agree with your conclusion. I've checked everything in the app I can think of. Of course, as I mentioned, a loader program and a views-only DBC might help by virtue of reducing network traffic, open files, etc. If only I could get the client to approve this, or get my company to agree to do it for free. But, as I tell my clients, this is really treating the "symptom" and not getting to the root cause. I have gone back and looked at the error log, and there are a few Error Reading File errors, but not nearly as many as Error Writing To File (about 30 to 1) in this particular app. I don't know why this is; however, if this were simply a matter of a locking issue within VFP, there are errors that would report this more accurately, versus errors that point directly to an OS or network problem of some type. (Both of these errors simply say that the OS has returned an error.) Could it be due to the size of the record and the number of index tags? This mostly occurs on the update of a table which has many fields and many indexes (I would not say it is inverted; around 1/4 of the fields are indexed). -- Randy Jean
A larger number of fields may imply a bigger record size;
Personally, I can imagine (not much more than that) that one record spans multiple data blocks (dbf), and by that I mean 3 or even more. This should not give problems by itself; however, once only a few fields are changed in one Replace, the first field may be in a first block, a second field in a second block, and a last field in a third block. I expect things can get stressed particularly when the second block is NOT touched (i.e. no Replaced field falls within it) while the first and third are. Referring to my VFP Corruption topic, I "feel" that things go wrong somewhere in this area of logic, though our record size is always smaller than the block size. What I am saying is that somewhere at block boundaries things can get out of order, and where "two blocks" are always something Fox, in combination with the network OS, has to deal with, three blocks may be more difficult.
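A sketch of the arithmetic meant here (block size, record size, header size, and field offsets are assumed values, not measurements from any real table):

```foxpro
nBlockSize = 512
nRecSize   = 1200             && record larger than two blocks
nHeader    = 520              && assumed DBF header size
nRecNo     = 7                && some record in the middle of the file
nRecStart  = nHeader + (nRecNo - 1) * nRecSize
* Suppose one Replace touches a field at offset 10 and one at offset 1150:
nBlock1 = INT((nRecStart + 10)   / nBlockSize)
nBlock2 = INT((nRecStart + 1150) / nBlockSize)
? "First changed field sits in block", nBlock1
? "Last changed field sits in block ", nBlock2
* Any block in between holds no changed field, yet lies inside the
* byte range of this same record - the "untouched middle block".
```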
To "prove" I may be right on this (three blocks being more difficult), someone should tell (me) how the refreshing of blocks operates; I already know that only the block holding the record pointer gets refreshed. Of course, when the record extends into another block (i.e. spans two blocks), things should go all right, and I don't expect things to go wrong here, because otherwise we would already have encountered problems in this area; this spanning occurs once every few records, thus very frequently. However, putting a sniffer on the network cable shows that only one block ever gets refreshed, and while looking at this, we ourselves never watched what happens at block boundaries (it may not even be possible). To show what I mean, and to emphasize that Fox may be wrong here somewhere: Skip through a table containing a number of data blocks (enough that it takes 10 seconds) so they will all be in the PC's cache, then Set Refresh To 1,1, and Skip through the table again; once per second one block comes in from the network, being the block which happens to hold Fox's record pointer. Again, what happens when the actual record spans two blocks, I don't know (will two blocks be refreshed then?).
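The experiment described can be set down roughly like this (table name is made up; watch a sniffer on the cable during the second loop):

```foxpro
USE bigtable SHARED           && a table spanning many data blocks
* First pass: pull every block into this PC's cache.
SCAN
ENDSCAN
SET REFRESH TO 1, 1           && refresh cached data every second
GO TOP
* Second pass: the sniffer should show one block per second coming
* in from the network - the block holding Fox's record pointer.
DO WHILE NOT EOF()
    SKIP
ENDDO
USE
```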
Just putting a Browse window on the screen (Set Refresh To 1,1) and the sniffer on the cable shows that each PC may deal differently with this. While I/we don't know what the underlying logic really is, you may see something like: first read 1 record, then read 2, then 4, then 8... and at some point the number of records read becomes more or less stable (perhaps once a complete block is there, and then the number of records read stays at block-size level). While I don't really want or need to know the logic here, I do know that this behaviour depends on the PC's parameters (looked at Novell only, but with MS Client). To me this proves one big thing: when having errors in this area, the PC may be a big influence. And that's not nice.
To the above I can add that, when writing, data is put on the network only at block level, and not at two-block level or less-than-a-block level. That is, looked at logically. I mean, determining which traffic is due to the writing PC and which to the reading PC is hard, and can only be derived from knowing the theoretical aspects, or better: from knowing how it all really works. And I don't, so I can only conclude things from what is shown to me.
The funny thing is (or just isn't) that where we can only look at a reading PC in order to draw conclusions, we may just as well look at the reading PC only, because it is there that things may start to go wrong. Or:
Where the writing PC writes at record level, and may perform an Unlock per record and thus puts out a block that has been written to several times, the reading PC only sees the block once it has become full. Whether or not the writing PC puts out the block before it is full, the reading PC only sees it when it has become full. (Note: this is hard stuff and not easy to explain in writing, but set up two PCs, one only appending records and the other only reading the table written by the first, and put in as much effort as needed to view the actuality of the data at the bottom of the table.)
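The two-PC setup described here, as a rough sketch (table and field names made up; run each half on its own machine):

```foxpro
* --- PC 1: only appends records ---
USE testtbl SHARED
FOR i = 1 TO 10000
    APPEND BLANK
    REPLACE stamp WITH DATETIME()
ENDFOR
USE

* --- PC 2: only reads the bottom of the same table ---
USE testtbl SHARED
SET REFRESH TO 1, 1
DO WHILE .T.
    GO BOTTOM                 && how current is the last record we see?
    ? RECCOUNT(), stamp
ENDDO
```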
With these lines, I am only saying that somewhere things operate too much at one-block level, where more (or less) than one block is subject to the logic of the app, or even of Fox itself.
What to do with the above? I am not sure myself; I can almost prove that things should go wrong given all this, and could try writing some program to stress it, but... I can't. So everyone "knows" more about all this than I can think of, and the network OS is more intelligent than I am (which is not too difficult).
To be more concrete on the subject, I mention two things:
When the record size is larger (let's say > 800 bytes) and there are more indexes (> 15), a For clause (Index On) may imply that an index entry won't be there, depending on the sequence of things in the program (I don't know the how and when by heart, but you can get around it by changing the sequence of things). This won't lead to corruption, but the data record can't be found via the index concerned.
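For illustration only (table and field names are made up), the kind of conditional index meant here is:

```foxpro
USE orders EXCLUSIVE
* A For clause means only matching records get an entry in this tag.
INDEX ON order_id FOR status = "OPEN" TAG openord
* The risk described above: depending on the sequence of operations
* in the program, a record that should match may end up without an
* entry - the data record is intact, but SEEK via this tag misses it.
```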
We have this app where 128 indexes are used on one table. And no, this is not normal, but it was some patent thing performing highly intelligent stuff;
I am not sure whether this was in FoxPro 2.0 or 2.5, but appending a record to this table got Fox looping or whatever; Fox couldn't cope with it. Having no indexes attached and performing an Index On afterwards, however, worked fine.
Randy's problem may imply that several indexes don't have unique keys, or: many index entries lead to the same key in an index;
Fox can't cope with this, and you will get a C5 somewhere, depending on the number of leftmost bytes being equal. So this is not really about the index entry not being unique, but about too large a number of records with an equal leftmost part, which Fox tries to compress as "the same as the previous one". Of course, records not having a unique key lead to the same.
One of the most important phrases from Randy above is his "Both of these errors simply say that the OS has returned an error", which is so very right, the error only differentiating between a Read and a Write error. Nice. Now since this is the case, we must wonder where the situations are in which we don't get this error reported, but things go wrong anyway;
While working (developing) with a tool, I always try to imagine that the tool in its kernel behaves the same as the tool visualizes its capabilities to you as the developer. Anyhow, I learned a lot from this. In this respect, I am thinking of things like Fox not being able to nest errors. Now suppose two errors are reported from the network OS at about the same time (I think this is possible), and only the first is trapped by Fox, leaving the other untrapped. IMO this theoretically may lead to corruption, which may never even be seen, because something else corrects it already.
When one PC adds a record, and on the other the Reccount() is not updated for whatever reason, the other overwrites the data of the first, but also resets the Reccount() in the table header; when the first PC adds again, the data will be added beyond the table boundary, leaving you with a Write error. No, never proven, but just "maybe".
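The record count Peter mentions lives at bytes 4-7 of the DBF header (little-endian). A way to compare the header's count with what Fox reports; a sketch only, with a made-up filename:

```foxpro
* Read the record count straight from the DBF header (offset 4,
* 4 bytes, little-endian) and compare it with RECCOUNT().
nH = FOPEN("customer.dbf")    && read-only open of the raw file
= FSEEK(nH, 4)
cRaw = FREAD(nH, 4)
nHdrCount = ASC(SUBSTR(cRaw, 1, 1)) ;
          + ASC(SUBSTR(cRaw, 2, 1)) * 256 ;
          + ASC(SUBSTR(cRaw, 3, 1)) * 65536 ;
          + ASC(SUBSTR(cRaw, 4, 1)) * 16777216
= FCLOSE(nH)
USE customer SHARED
? "Header says:", nHdrCount, "  Fox says:", RECCOUNT()
USE
```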
Where we ourselves (= all our customers!) never encounter an Error Writing To File, we do encounter Cannot update file frequently. Bear in mind, this is the same! But not really; IMO the latter only occurs due to an "overall" write command such as Flush, of which we have proven that file handles can get out of order (?) in relation to printing. In the end this is caused by wrong Novell client parameters, where "wrong" means: conflicting with Fox's caching parameters. Thus the Flush (performed implicitly by Fox in many commands, think of Quit and Close Databases) loops over all the active file handles; on "something" the (network) OS says "oops", and Fox doesn't know where. Now bear in mind:
Where things provenly can go wrong on any file handle, this will (for me 100% sure) go wrong in the same way if only this particular file is closed (Use, or Use xxx in the same work area). I'll bet on that. Only, when it goes wrong by addressing this one file explicitly, and the network OS reports just the same error, Fox itself now knows where it is, and reports Error writing to file, anticipating that it has helped you as the developer enough; this concerns THE file you are closing.
After writing all this (too senseless) stuff, I come to the conclusion that we indeed have the Error writing to file error too, occurring when client parameters conflict with Fox's internal workings.
Since this is now my opinion, two things can be done:
Find the error in the printing stuff, by which I mean that the connecting of printers is performed illegally (i.e. using a Capture from a former Novell version, or having a Net Use where it should be a Capture, etc.), because for us this implied the Cannot update file (or in VFP: Cannot update the table);
Switch all client parameters (the registry for NT, because there is less in the shell) for caching, opportunistic locking, etc. off.
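For NT-family machines, the usual registry switches meant here are the opportunistic-locking ones. This is a sketch of commonly cited values, not a verified fix; check them against your own OS version before applying:

```
; Server side: disable opportunistic locking
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    EnableOplocks = 0  (DWORD)

; NT 4 client side: do not request oplocks
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Rdr\Parameters
    UseOpportunisticLocking = 0  (DWORD)
```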
If this doesn't solve the problem, well, it is something else (nice, huh).
-- Peter Stordiau
The table in question has a record size of 3330 bytes. Record count is close to 100k. 8 memo fields (is this bad?), 132 fields, 34 indexes. Most indexes are simple structural tags. I just noticed I still have an index tag on DELETED(). Hmmm... thought I got rid of that a long time ago. Could this be causing these problems? -- Randy Jean
ON DELETED() ? or ... ON NOT DELETED() ? -- Peter Stordiau
From Hack Fox: By creating a tag on DELETED(), you let Rushmore do the checking instead of looking at each record sequentially, which makes the whole thing much faster. The larger the result set, the more speed-up you'll see.
Do not create indexes with a NOT expression for Rushmore optimization. Rushmore ignores any tag whose expression contains NOT.
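The idiom Hack Fox describes looks like this (table and tag names are arbitrary):

```foxpro
USE orders EXCLUSIVE
* With SET DELETED ON, every query carries an implicit filter on the
* deletion flag; this tag gives Rushmore an index to satisfy that
* filter with, instead of testing DELETED() record by record.
INDEX ON DELETED() TAG _deleted
```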
Since I run with SET DELETED ON, this is supposed to speed up queries (which I do a lot of for reports, etc.). Unless I'm reading this wrong -- Randy Jean
For starters, I (we) never depend on Rushmore, because with us, doing stuff without an index is not allowed. Please bear in mind that I do not know much about Rushmore, i.e. I don't know whether optimization takes place including the use of an index (I know, I should just read the Fox help). However:
How should Rushmore benefit from Index On Deleted()? Okay, when looking for Deleted records with Set Deleted Off, maybe, but I don't think that is what you are doing, Randy. What you almost literally say is that you use Set Deleted On, thus looking for existing records, and let Rushmore optimize this by (you) using an index that contains Deleted records only? No, this can't be true. Please keep in mind that indexing on "another index" never has anything to do with the one you have active (Set Order) right now; though I don't expect you to be confused on this. What Hack Fox says on this? (didn't check) I don't know, but it can't be true (as you stated it).
What I do know, however, is that a For clause (Index On) with NOT DELETED() will speed up things significantly while performing a SKIP, since deleted records will not be read from the data (dbf) at all, the index being leading and simply not containing the deleted entries. This has nothing to do with Rushmore! Now, coming back to my first sentence, Rushmore will optimize any Skip as far as I know, so yes, it will benefit from this For clause too. Okay, or just not, when a NOT isn't allowed there (yes, I do have this in mind). But never mind, the ON DELETED() doesn't make any sense unless one looks for Deleted records (or it may be too late for me over here (12:20 am)).
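For contrast, Peter's construction is a filtered index rather than a tag on the deletion flag; a sketch with made-up names:

```foxpro
USE customer EXCLUSIVE
* Deleted records get no entry at all in this tag, so SKIPping along
* this order never reads them from the dbf.
INDEX ON cust_id FOR NOT DELETED() TAG livecust
SET ORDER TO livecust
```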
Anyhow, back to the Write error: I really don't think things get stressed because of your index (if this is true after all), because with Set Deleted On Fox will just never find any deleted record... Find?
Well, maybe if you perform a Recall, or perform a Set Deleted Off and then Replace or Delete again? ... I'm not that sure whether Fox expects this kind of rarity (sorry).
Maybe a. you check whether I am right on this, and b. if yes, try to remove this index or change it into ON NOT DELETED() (Rushmore or not).
On this subject I have in mind that the UNIQUE clause doesn't do what it promises either, leaving you with non-unique keys after fuzzing around with Deletes and Appends (same key) again, or the other way around. So somewhere there a Fox error exists, and you never know...
-- Peter Stordiau
Peter, UNIQUE should never be used to enforce uniqueness. I have never seen a need for a UNIQUE index and NEVER do a SET UNIQUE ON, ever. I've actually had to fix other so-called "programmer" mistakes along these lines. Caused years of grief for one particular client because the previous programmer could not figure out that his SET UNIQUE ON not being turned off after a certain program was run was wreaking major havoc on their data's integrity. (See VFP Misused And Abused) -- Randy Jean
Contributors Peter Stordiau, Randy Jean
See Also: Error Reading File
Category Configuration Management, Category Error, Category VFP Troubleshooting
( Topic last updated: 2001.08.07 01:37:43 PM )