Remote datalogging

I have looked without much success for a device which I can install in remote locations, primarily to carry out datalogging, but also with the possibility of local control. It will have to be able to log locally to a non-volatile store (in case of local power loss) and send logged data to a central store as required. It will potentially have to cope with holding several days of data before it is emptied.

I read with interest Nathan’s post on reading historical data from a PLC. The only downside I can see is that this depends on PLC memory, which tends to be rather limited/expensive. This might work if a PLC exists that can log (simply!) to CompactFlash media or similar.

We are not keen to use a PC in each remote location due to cost and reliability issues. I would be interested to hear if anyone has found a product which fits the bill.

Do you mean for comm loss? If there is local power loss, how does the PLC keep functioning?

I’m sure others have more informed suggestions than I do, but for what it’s worth…

There are quite a few hardened industrial, fanless PCs available. I’m not sure if you meant reliability of the PC hardware, or the software. FactorySQL of course does what you describe (including local data caching on comm loss), but certainly requires a PC.

I think there are various hardware dataloggers available. Perhaps your requirement for local control limited your search? Maybe a 2-device solution would work best (a cheap PLC for limited local control plus a hardware data logger). I’m not sure how these hardware dataloggers work - in particular, how you get the data out of them.

Good luck!

Al,
A few things:

  1. Our model recommends a PC, which can be DIN mounted or “industrial”, that is reasonably “close” to the PLC, but can be “distant” from the central SQL database. We recommend the PC for its cheap memory and processing power. “Close” means that you’re unlikely to lose the network connection between them. The PC and PLC may go down together during a power outage (you’ll inevitably lose the data for any period the PLC is down, but you don’t want to lose prior data). “Distant” is acceptable between the PC and SQL database because of FSQL’s local data caching.

  2. My post was a way to answer the question I’m constantly asked: “why can’t we cache historical data in the PLC with FactorySQL?” It’s kind of a hack and doesn’t fit our model well. In fact, it is only helpful in the situation where the PLC and computer are both up but the network is down between them. This shouldn’t be a common situation. If it does happen, the focus should be on fixing the network, not on FSQL and the PLC.

The fact that we’re discussing trade-offs reinforces the fact that this situation could be improved upon. IMO the best solution would be a cost-effective embedded computer (DIN mounted, DC powered, NEMA rated, etc) that ran a full version of FSQL (PMI too for small installations) and interfaced with cheap solid state memory (CF, SmartMedia, etc). You might be able to configure a Sixnet PLC or something similar to achieve the same effect with FSQL. In theory, if OPC (servers and spec) dealt with historical data “better” there could be a nice way to implement FSQL where the data was cached in the PLC in a vendor-neutral way. Dreaming further, this would be really cool for slow connections since the data being transferred could also be compressed.

There are hardware dataloggers. I haven’t used them, but am under the impression that they’re cheesy compared to the functionality of a PC and a decent program. I think they typically store the data as text - using separate files per day - and can either be set up to upload them periodically via FTP or have a simple web server built in. I have no idea how much they cost. They could be right for certain applications and may be usable with FSQL/FPMI. If I hear about one that is, I’d be happy to recommend it.

Thank you for your thoughts.

I’m sure you could engineer a solution with a hardened, solid-state PC and FSQL, but the issue with this approach is one of cost – each installation will cost several thousand dollars. A recent job we looked at required local datalogging in each of 26 wind turbines, so cost of each installation was very important.

Having a situation where the PLC and computer are both up but the network is down is in fact very common in some industries, for example where you have remote outstations without a permanent connection to a central control room. In this situation the control room will connect to each outstation maybe once a day using a dialup modem (for example) to download information that has been collected.

The reason I was interested in Nathan’s approach was that it was very generic. It also had the advantage of being a single box solution, instead of having to install a separate PLC and datalogger. If the datalogger and PLC are not linked via some kind of network, this approach can become complicated, with a requirement to feed inputs to both the PLC and datalogger.

I have looked at a number of dataloggers but the problem comes in trying to get the data out of them and into a relational database. This would need code to read the data as a file, then code to step through the file, extract its contents and put them into the database. I much prefer the approach of using standard FSQL where possible.

I still think a PLC with easy access to non-volatile storage such as CompactFlash would be the best solution.

As an aside, a variation on Nathan’s approach would be to have 2 pointers tracking through a circular buffer. One pointer would track where the PLC was in the buffer, the other would track where the PC was. If the PC was ‘behind’ the PLC, it would read data until it caught up – some mechanism would be required to cope with wrapping round from the end of the buffer back to the beginning. If the PLC ever caught up to the PC, the PC’s pointer would be pushed ahead of the PLC’s, losing data. The advantage of this approach would be to avoid having to move lots of data on each read if the buffers were large and lots of data had backed up. Maybe this is an unnecessary complication.
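The two-pointer scheme above can be sketched as follows. This is purely illustrative - the real thing would live in PLC logic, and the names and tiny buffer size are invented. Note that when the PLC laps the PC, the PC’s pointer is pushed ahead and the oldest unread sample is lost, exactly as described:

```python
# Hypothetical sketch of a circular buffer with two tracking pointers.
# plc_ptr: next slot the PLC will write; pc_ptr: next slot the PC will read.
BUFFER_SIZE = 8  # a real PLC buffer would be far larger

buffer = [None] * BUFFER_SIZE
plc_ptr = 0
pc_ptr = 0

def plc_log(value):
    """PLC side: store a sample, advancing (and wrapping) its pointer."""
    global plc_ptr, pc_ptr
    buffer[plc_ptr] = value
    plc_ptr = (plc_ptr + 1) % BUFFER_SIZE
    if plc_ptr == pc_ptr:
        # PLC caught up to the PC: push the PC's pointer ahead,
        # sacrificing the oldest unread sample.
        pc_ptr = (pc_ptr + 1) % BUFFER_SIZE

def pc_read():
    """PC side: drain every unread sample between pc_ptr and plc_ptr."""
    global pc_ptr
    out = []
    while pc_ptr != plc_ptr:
        out.append(buffer[pc_ptr])
        pc_ptr = (pc_ptr + 1) % BUFFER_SIZE
    return out

for v in range(10):   # log 10 samples into an 8-slot buffer
    plc_log(v)
drained = pc_read()
print(drained)        # samples 0..2 were overwritten before the PC caught up
```

One subtlety this sketch exposes: with this full/empty convention the buffer holds at most BUFFER_SIZE - 1 unread samples, since plc_ptr == pc_ptr has to mean “empty”.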

Regarding the time tag in the PLC: would it be possible to record something like the number of seconds since the start of the year, then translate this back to a time and date in FSQL before saving it to the database? This could be held in a 32-bit integer rather than a 38-byte string, greatly saving on PLC memory.

Al

Al,
Thanks for jogging my memory - wind turbines, or anything on a “part time” connection (satellite, dialup, or leased line), are really good examples of that specific case. You also bring up a good point about the economy of being able to use a single central FactorySQL setup instead of requiring something additional at every site.

A mechanism to wrap/overwrite data makes sense as you overflow past your memory bounds - this would be set up in the PLC. Having 2 pointers to track where the PC is versus where the PLC is seems functionally equivalent to what I described - in both of our examples FactorySQL should only read each value exactly once. You could easily implement your idea by triggering the group based on “pointer1 != pointer2”. Neither example is doing a block data read from the PLC, which would be more efficient in terms of data transfer and time (although you could read value+timestamp together). Your example adds the (probably unnecessary) benefit of keeping cached history in the PLC even after FSQL has read it, while mine performs more operations in the PLC to conceptually simplify the situation by dumping the value from the PLC after a read. As far as I can tell we’re both doing pretty much the same thing.

You could set FactorySQL up to work with the PLC and store dates in a more efficient manner. If you use a 32-bit unsigned integer and only need precision down to the second, you can choose a starting date - that gets you 136 years’ worth of seconds. If you need to be tight with memory, I’d use a 24-bit unsigned number of seconds, for a total of 194 days since FSQL’s last read. The rolling reference can also introduce potential pitfalls.
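The ranges quoted above are easy to sanity-check, and translating the counter back to a real timestamp is a one-liner in most environments. A quick illustrative check (the Jan 1, 2007 base date is just an arbitrary example):

```python
from datetime import datetime, timedelta

SECONDS_PER_DAY = 24 * 60 * 60
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

# Range of an unsigned counter of seconds:
years_32 = 2**32 / SECONDS_PER_YEAR   # ~136 years for a 32-bit counter
days_24 = 2**24 / SECONDS_PER_DAY     # ~194 days for a 24-bit counter

# Translating the counter back to a real timestamp before logging,
# given an agreed base date:
base = datetime(2007, 1, 1)
stamp = base + timedelta(seconds=12_345_678)

print(round(years_32, 1), round(days_24, 1))  # 136.1 194.2
print(stamp)                                  # 2007-05-23 21:21:18
```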

A few things to consider:

  1. Keep in mind that the PLC and PC are using separate clocks. FactorySQL can synch time with a group that writes its HHMMSS to PLC registers periodically.
  2. In the rolling reference example you would want to buffer up your data to a certain size before performing the read operations and time synch reset.

To accomplish this:

  1. Create a queue or circular buffer as described, storing the “date” as number of seconds since some date, say Jan 1, 2007.
  2. Create OPC Item(s) that read your historical value(s) and seconds. You could do some ridiculous bit banging here if necessary.
  3. Create action items as necessary as expressions if you need to de-couple the OPC values. The simplest case has no action item here and two OPC items: value and seconds.
  4. Create an action item that is an SQL INSERT query like in my example. The only difference is that it should include a date addition function that writes the date_time + seconds to the t_stamp column.
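Step 4 might look something like the following sketch. It uses SQLite purely as a stand-in for whatever database FSQL would actually target; the table and column names (history, t_stamp, value) and the register values are invented for illustration. The point is that the date addition happens in the SQL itself:

```python
# Hypothetical sketch: INSERT that converts a "seconds since Jan 1, 2007"
# counter back into a real timestamp on the way into the database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (t_stamp TEXT, value REAL)")

plc_seconds = 3600 * 24 + 90   # counter register read from the PLC
plc_value = 42.5               # the cached historical value

# SQLite's datetime() performs the date addition; other databases have
# equivalents (e.g. DATE_ADD in MySQL, DATEADD in SQL Server).
conn.execute(
    "INSERT INTO history (t_stamp, value) "
    "VALUES (datetime('2007-01-01', ?), ?)",
    (f"+{plc_seconds} seconds", plc_value),
)

row = conn.execute("SELECT t_stamp, value FROM history").fetchone()
print(row)  # ('2007-01-02 00:01:30', 42.5)
```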

Your suggestion is really a minor modification and a clever idea. You can save memory by being clever with your bits, but I wouldn’t go there unless it makes a big difference - your time is worth something and it makes it obscure for the next guy.

Nathan,

Point taken about keeping things as clear as possible - maintaining other people’s code is hard at the best of times.

As a digression, time synchronisation is something we had a look at. If a customer only requires events to be related to other events on the same machine, the time only needs to be loosely synchronised between PC and PLC. However, this would really have to be done when the local cache was empty, as you point out, or the order of events could get mixed up. If the customer wants inter-machine synchronisation of events, things get complicated. You have to use either specialist devices such as Allen Bradley’s synch modules for ControlLogix, or Ethernet switches with GPS modules which allow accurate time synchronisation using SNTP - Westermo makes switches which can achieve an accuracy of 1 µs. Sounds impressive, although I’ve not used them.

I loved your solution of adding the number of seconds to your base time in the SQL INSERT query. In true VB fashion I was thinking about huge expressions to calculate the number of days, hours etc. and build a date/time string before writing it out. I’ve read posts where you and Carl have been keen to stress the use of built-in tools and functions to create simpler, more elegant (and more understandable) code. Good advice :slight_smile:

Al,
Thanks for the compliment. Keeping time synchronized between PLCs can be done easily. I’m not speaking to very specialized cases where you need GPS-based, deterministic, or tightly bounded timing - those could certainly call for an engineered solution with appropriate hardware.

It’s pretty easy to set up a FactorySQL group that sets the PLCs time (and date) based on the SQL database’s time. A similar group can be set up for each PLC that FSQL can talk to. It doesn’t matter if they’re different brands, represent “time” differently, etc.

Suppose you wanted to update all the PLCs’ clocks on the hour. Create a group for each PLC that “writes” the correct time. In most cases this will be a separate integer register for year, month, day, hour, minute, second. You might have to get the format to ms since something, a big dateTime string, etc. Each OPC item should be “read only” - you’re introducing them as a reference, not storing their values in the database. You would then create an “action item” that is an SQL query that gets that “piece” of the time - second, for example. That action item would then “write back” its value to the PLC register. You then create an action item to use as the trigger. I would use the same one for each PLC clock. I’d recommend using the HOUR as determined by the database and triggering on a value change. The group update rate tells how often to inspect. Once a minute makes sense in that example.
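As a rough illustration of the idea (not FactorySQL itself - just the decomposition the group would perform; the register names are hypothetical):

```python
# Split the database's "now" into the separate integer registers that
# the clock-sync group would write back to the PLC.
from datetime import datetime

def clock_registers(db_now):
    """Decompose a timestamp into per-field integers (names are made up)."""
    return {
        "year": db_now.year, "month": db_now.month, "day": db_now.day,
        "hour": db_now.hour, "minute": db_now.minute, "second": db_now.second,
    }

# In practice db_now would come from the database (e.g. SELECT NOW());
# a fixed value keeps this sketch deterministic.
regs = clock_registers(datetime(2007, 6, 15, 13, 45, 30))
print(regs)
```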

The beauty of keeping all PLCs synch’d to the SQL database is that you have a common timing source, even if you scale to multiple computers with FSQL, PLCs, etc. Better yet, most historical data “logs” using now() or CURRENT_DATETIME, which is evaluated entirely with respect to the SQL database’s clock.

You are correct that changing the PLC time will change the logged time of cached data. However, our described implementation never takes PLC/DB time discrepancy into account. Therefore a time delta, error, is introduced regardless of whether our queue was piled high, empty, or somewhere in between. We have no way of tracking when this occurred and for which points it’s relevant. Net result - changing the synch’ing of clocks based on queue size won’t make a difference.

You could introduce logic that: waits for the queue to get to a certain size, checks the delta, and logs accordingly. This might be a little more accurate but still depends on when timing went off and the rate. Heck, it could introduce error. Bottom line - way too much complexity for questionable gain. Synchronizing the PLCs periodically based on the DB clock with FSQL should keep “very” close timing compared to our logging rates - you’ll want to verify this on your own system, of course. As you alluded to, if you truly need to keep times really close - engineer an appropriate hardware solution instead of attempting ridiculous hacks with FSQL.

Al, here is something to look into. It is a datalogging device we intend to use, and it seems to have all the requirements you are looking for. The web site is heapg.com and the device is the Horner XLE. It has a compact flash unit on it which I think is currently maxed at 2 gig - your choice on the size to use. In the coming month they are set to release the firmware which will allow you to FTP to the device remotely and pull the data file from the compact flash. Then you could use a simple program to manipulate this CSV file into some other format and, should you be using SQL, run a query, compare the two, and fill in any gaps or missing data. The PLC is easy to program and has plenty of options such as dial-up modem, TCP/IP and wireless. All the details are on their web page, which I gave you above. Just something to look at if you’re interested.

Have a nice day.


mrtweaver,
Thanks for the recommendation. I should elaborate on my earlier post - I shouldn’t have used the term “cheesy”. Those devices are often a perfect fit for a particular problem. Suppose you had to log data 30 days back in case there was a problem, but in most cases nobody looked at the data. Compared to a PC-based solution, this device is cheaper and has fewer potential points of failure. FTP access (or a built-in web server) provides simple access from anywhere, and the CSV format (hopefully zipped) is easy to read on any computer and manipulate with Excel.

These devices get gnarly and lose much of their benefit when integrated into the enterprise - particularly when you want to deal with data in SQL databases (for IT support, maintenance, backups, distributed visualization, analysis, etc). Any way you cut it, you’ll introduce all the points of failure of the PC-based approach plus this device. You’ll need to deal with transferring the file via FTP, which assumes that an FTP server is up AND implies historical data lag time, since you’re probably doing an infrequent batch transfer. Writing a script to read the CSV into the SQL database isn’t hard, but something (probably a computer) has to do this periodically. Dealing with multiple logging data sources and missing/overlapping data is a big pain. It’s not that integrating these devices into your process can’t be done - it’s that they add too many moving parts. You can do better, simpler. They’re very cool and work well for what they were designed to do.
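For reference, the kind of CSV-to-database script mentioned above doesn’t have to be elaborate. This sketch assumes an invented CSV layout and uses SQLite as a stand-in database; a primary key plus INSERT OR IGNORE is one crude way to handle re-transferred or overlapping files:

```python
# Minimal sketch: load a device's CSV log into an SQL table, skipping
# rows that were already imported on a previous transfer.
import csv
import io
import sqlite3

# Stand-in for a file pulled off the device via FTP (layout is assumed).
sample_csv = io.StringIO(
    "t_stamp,value\n"
    "2007-06-15 13:00:00,10.5\n"
    "2007-06-15 13:01:00,11.2\n"
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (t_stamp TEXT PRIMARY KEY, value REAL)")

# INSERT OR IGNORE silently skips rows whose t_stamp already exists,
# so re-importing an overlapping file doesn't duplicate data.
for row in csv.DictReader(sample_csv):
    conn.execute(
        "INSERT OR IGNORE INTO history (t_stamp, value) VALUES (?, ?)",
        (row["t_stamp"], float(row["value"])),
    )

count = conn.execute("SELECT COUNT(*) FROM history").fetchone()[0]
print(count)  # 2
```

Something still has to run this periodically, of course - which is exactly the extra moving part described above.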

I interpret Al’s request as a device that basically does what FactorySQL historical groups do. The technical difference is pretty subtle, but it would play nice with an enterprise system. The device would log data remotely to an SQL database and have a local cache. It’s a pretty simple concept that could solve a lot of problems. I’m guessing that vendors don’t do this to avoid stepping on their “Enterprise Software” space. Also, given these capabilities, users will feature request this thing to death. They’ll need to support different SQL database types/schemas, want to log at variable rates or based on events (triggers), want to scale/format data, transfer data in batches, compress the data, support some form of redundancy, etc, etc. This leads to a computer and FactorySQL, RSSQL, inSQL, or whatever.

Also, I checked out the XLE. It looks like a cool all-in-one device - basically an expandable PanelView-type device with its own built-in PLC, MicroSD memory reader, and web server or modem (I think). It looks to provide some pretty serious stand-alone capability.

I do agree with the FTP points you made. However, if the end user would like to use web-based access instead of FTP, they can go up to the next level of this make of PLC and get the NX; the NX does support web-based access and comes with a demo program to show how this can be accomplished. There has been talk about making the XLE somewhat beefier by having it do a lot of the same features as the NX, but I don’t know if that is just talk or if they are actually looking into it. I know for a fact that the XLE upgrade for FTP is coming very soon - it is in beta testing - but whether or not that includes web-based access is unknown at this time.

As for the conversion needed to compare what has been written to the compact flash with what has been written to, say, an SQL server: our data processing dept has written a little program in FoxPro which does the conversion from CSV and compares the data. As long as both the CSV and the SQL are written with the same types of data, I am told this is an easy process. I don’t have the skills or knowledge to do such a task, or to elaborate on it much further than what I have been told. So as for ease, I cannot guarantee it.

Although I have not done it as yet, there are said to be some new functions coming in the future whereby you could monitor a connection; if the connection goes down, you save the current pointer while still logging to the compact flash, and when the connection is re-established it starts where the pointer was and catches up. But again, this is something I have not done yet, so I don’t know the complexity.

I do appreciate your feedback on this device, and I’m sure you have more knowledge than I do about the finer workings and such. This is why I love this forum so much - it allows everyone to kind of sync up and see what is available and what may or may not work based on their needs.
