Practical Limits on Store Max Records

Aside from the obvious upper limit of total hard drive space, has anyone run into issues when setting max records to a large value?

I’m specifically looking at setting the max records to 1,000,000. My (perhaps unfounded) concern is that there might be issues handling the cache at that scale. (My understanding is that the cache is a flat file.)

Anyone have any experience with storing records at this volume?

If you’re storing in a database table, it shouldn’t be a problem. We have at least one client with almost 3,000,000 (three million) records. If your record count could get anywhere near 2,147,483,647 (the signed INT limit, roughly two billion), make sure to use BIGINT instead of plain INT for the key. If there is even a chance you’ll approach the BIGINT limit of 9,223,372,036,854,775,807 (about nine quintillion), or if replication is involved, you’ll probably want to switch to a GUID.
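
For a quick sanity check of those thresholds, here’s a rough sketch (plain Python, nothing database-specific; the limits are just the standard signed 32-bit and 64-bit integer maximums that INT and BIGINT map to in most databases, and the cutoffs in the function are my own illustrative choices):

```python
# Rough headroom check for an auto-increment key on a records table.
# INT and BIGINT in most databases are signed 32-bit and 64-bit integers.
INT_MAX = 2**31 - 1      # 2,147,483,647 (~2.1 billion)
BIGINT_MAX = 2**63 - 1   # 9,223,372,036,854,775,807 (~9.2 quintillion)

def suggested_key_type(expected_rows: int) -> str:
    """Illustrative only: pick a key type with comfortable headroom."""
    if expected_rows < INT_MAX // 2:
        return "INT is fine"
    if expected_rows < BIGINT_MAX // 2:
        return "use BIGINT"
    return "consider GUIDs (also the usual choice when replication is involved)"

for rows in (3_000_000, 3_000_000_000, 2**62):
    print(f"{rows:>25,} rows -> {suggested_key_type(rows)}")
```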

We always go bigint. We have databases with several million records, of course.

I don’t know if I was explicit enough: this is specifically in reference to the Max Records setting under “Store” in the store & forward system.

Yes, I’d advise against this. The store and forward system doesn’t use a flat file; it stores the binary objects in an internal database. As a small aside, this also means that “1 record” isn’t necessarily 1 value… it could be thousands of values (all of the tags in a particular historical scan class, for example).
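
Just to put rough numbers on that aside (the tags-per-scan-class figure below is hypothetical, purely to show how quickly it multiplies):

```python
# Hypothetical illustration: one store-and-forward "record" can bundle an
# entire historical scan class, so the number of cached values grows much
# faster than the record count. The figures here are made up.
max_records = 1_000_000        # the proposed Store "Max Records" setting
tags_per_scan_class = 1_000    # hypothetical values bundled into one record

print(f"{max_records:,} records could represent ~{max_records * tags_per_scan_class:,} values")
# 1,000,000 records could represent ~1,000,000,000 values
```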

Experience has shown that there is a point where the time it takes to store and retrieve items from the local cache becomes greater than the time it takes to write them to the database, which is a bad situation. Increasing the “forward size” (the batch size written to the final database) from the default of 25 to something bigger, like 250, helps, but you will eventually still hit a wall.
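
To see why a bigger forward size helps but only up to a point, here’s a toy model (all the timing figures are invented; the real bottleneck being described above is the cost of pulling items back out of the local cache):

```python
# Toy model of cache drain throughput vs. forward size. The per-batch
# overhead (cache lookup + transaction) is amortized over more rows as the
# batch grows, but the per-row cost remains, so the gains flatten out.
# All timings are invented for illustration.
def rows_per_second(forward_size: int,
                    per_batch_overhead_s: float = 0.050,
                    per_row_cost_s: float = 0.001) -> float:
    batch_time = per_batch_overhead_s + forward_size * per_row_cost_s
    return forward_size / batch_time

for size in (25, 250, 2500):
    print(f"forward size {size:>5}: ~{rows_per_second(size):,.0f} rows/s")
# forward size    25: ~333 rows/s
# forward size   250: ~833 rows/s
# forward size  2500: ~980 rows/s
```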

One of our top goals for 7.8 is to improve this system. In the meantime, I wouldn’t recommend setting it higher than 50k or so.

Regards,