Data transfer timing

From mrtweaver:

I have some questions that I am hoping someone might be able to answer.

  1. Is there a way to determine wire time? (I hope I have this term correct; what I want to know is how long it takes for my data to get from the PLC back to the PC and into the SQL table.)

  2. If I have 40 tags going back for historical data (2 are 20-char ASCII strings, 3 are 4-char ASCII strings, 24 are boolean bits, and the remaining 12 are DINTs), would it be more efficient to use a standard group or a block group, or would it not matter? I am using a trigger, which is why I am curious about wire time. I want to know that the data is stable, so that in FSQL when the trigger goes active it correctly grabs this data and sends it to the SQL table.
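For scale, here is a back-of-envelope on how much data that group actually moves per trigger (the byte sizes are my assumptions about how the driver packs these types):

```python
# Back-of-envelope payload size for the tag set above. The byte
# counts are assumptions about how the driver packs these types.
string_bytes = 2 * 20 + 3 * 4   # two 20-char + three 4-char ASCII strings
dint_bytes = 12 * 4             # twelve 32-bit DINTs
bool_bytes = 2 * 2              # 24 bits packed into two 16-bit registers
total = string_bytes + dint_bytes + bool_bytes
print(total)  # 104 bytes -- a small blob for a single protocol read
```

So the whole blob is only on the order of a hundred bytes, which is small for one read.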

This kind of goes as follows. On the PLC we are using there is a compact flash card. At the same time that the data is written to the compact flash, it also sets the bit that triggers the group to read the historical data and write it to the SQL table. Both the write to compact flash and the group timing in FSQL are set to the same value, 1 second. However, what we are seeing is that FSQL will read the same data and write the same data twice, so you end up having duplicate records in the database. When it does this it sometimes misses the next event record. I am thinking it probably is taking more than 1 second to read all the data from the PLC, and that when the trigger goes high a second time within a two- to three-second interval there is not enough time to read the data, so it writes whatever was there last. I only think this because we did a study on Friday where we ran the machine and recorded all the data. Then we dumped the data from the compact flash and checked it against the manual form. This was 100% good. However, when we checked it against the SQL table there were flaws.

One other option that I am thinking might work well would be to have FSQL do the time stuff. That 20-char ASCII is a date/time stamp and provides a span of time. Instead of doing it at the PLC, just monitor the bits and have some sort of script or something that would do the same thing. This way that amount of data would not require as much wire time. It takes less to read a bit than it does to read a string, or at least I think so; not sure. A couple of other areas that read the strings might also work well doing it as a bit and having FSQL convert it. Any thoughts on whether this would be better, the same, worse, etc.?

Just trying to get all data validated so we can continue to move forward with this project.

Hi-

Unfortunately, nothing immediately comes to mind in order to calculate the lag time of data updates. I suppose you could construct some sort of back and forth between the PLC and FSQL to calculate it, but ultimately I’m not sure it would help too much, because FSQL wouldn’t be able to adapt on the fly to the conditions.

In this case, using a block group instead of a normal group wouldn’t likely change anything, assuming you have all of these data points in 1 normal group. Block groups are very efficient at performing many database queries (updates/inserts) at a time, so they are very good at replacing many normal groups that would be writing the same points to the same table. Beyond that, as it relates to the OPC data, there’s not much… all of the points ARE grouped together, so that might help, but generally the OPC server is smart enough to optimize based on all the subscribed tags overall, across all groups, instead of just one particular one.

I guess the biggest thing you could try would be to put a small delay between preparing the data and setting the trigger, and then adjusting FSQL to match this timing. Here’s what I mean, assuming the trigger may occur every 2 seconds or so:

  1. Put a 500ms delay between when you set the registers.
  2. Set your group’s update rate to 500ms. Remember, since it’s running on a trigger, it’s doing very little work in general.
  3. Under Service Settings, make sure the “OPC Update Rate %” is something like 33% or 50%. This means that it will ask for OPC updates at that % of the group’s execution.
    Thus you should be getting 2-3 value updates in that 500ms delay before the trigger, and when the trigger goes high there will only be a max of 500ms before the group logs, thus accomplishing everything in around 1sec, giving you plenty of time before your next trigger.
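In rough numbers, the budget from the scheme above works out like this (these are the suggested settings, not values read from FactorySQL or Kepware):

```python
# Sketch of the timing budget described above. All numbers are the
# suggested settings, not anything queried from FSQL or the OPC server.
group_rate_ms = 500                         # FSQL group update rate
opc_pct = 0.5                               # "OPC Update Rate %" setting
opc_rate_ms = int(group_rate_ms * opc_pct)  # server asked for updates every 250 ms
plc_delay_ms = 500                          # PLC delay before setting the trigger
updates_before_trigger = plc_delay_ms // opc_rate_ms
worst_case_ms = plc_delay_ms + group_rate_ms  # delay + one full group cycle
print(updates_before_trigger, worst_case_ms)  # 2 1000
```

So you get a couple of OPC updates inside the delay window and still finish well inside a 2-second trigger interval.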

You can play around with the numbers. Also, I should mention, that there are a range of other factors that could come in here… requesting updates more quickly obviously means the OPC server will poll more quickly, and depending on the processor, protocol, opc server, amount of data, etc. could be OK or not. You’ll just have to try and see. If you implement something like the above and it’s still not stable, it could indicate that lag is a serious issue, and that you’re overwhelming the system (the elements I just described). But we can go from there to the next possible solution…

Regards,

I also wanted to add the following, which I noticed I forgot in my original posting. What I did in the PLC is use a set bit, and I don't continue through the logic until this bit is reset. In the FSQL group I used both the handshaking and the reset-trigger-to-zero options. This way, once the bit is set, the PC software has to reset it. But this still did not help; I still get these double entries and such. Now, because it does not happen on the compact flash and the time stamps are on the mark, I am not 100% sure where to look; I am just poking around and trying different things. I talked to Travis, and his thought was that maybe the bit in the PLC logic is resetting and then setting again, which is causing the double entries. That could be the cause if double entries were all I was getting, but I am also getting missed entries and erroneous entries.

This is what seems to point me toward something in the wire time. I mean, does it take longer to transfer the information over the network wire (TCP/IP) than it takes to write the information to the compact flash? I know for a fact that the scan time in the PLC is 30 ms, and I know that it takes roughly 1 second to write all the data to the compact flash. These are all measurable items in the PLC. The fact that the correct data is in the proper registers, and is buffered, when it writes to the compact flash tells me that the data is stable. These are just things I am thinking about in an effort to nail down the problem and fix it.

Some of the other things: what if you have items in KepServer that are not connected? I had talked to David at KepServer, who told me that putting all my devices on their own channel was the correct way to do it, so that if one went down it would not have a bad effect on the others. But if you have all of these other tags that are down and not connected, since KepServer will continue to poll them to see when they come back, will this affect wire speed? Like I said, just trying to nail it down.

Maybe you someone could answer this question. In FSQL setting for a group you have the trigger, and you also have the frequency that the group is inspected. In the trigger section of the manual it says:

If it is selected, the trigger condition must be true for the group to do anything (except run an Action Item that is set to run every update interval).

So if you have the same tag that does the trigger in the group, this would then tell me that it is going to use the update rate to monitor the tag; when this tag goes true, it also sees that it is the trigger and executes the group. However, when it executes the group, does it just take a current snapshot, or does it first run some sort of update query against the OPC server and then write the data?

Next, what would happen if I took the tag that does the trigger, moved it to an action item, set the action item to ignore group settings and use as a trigger, and then used it as a trigger? How would the update interval affect things within the group? Say the update interval was set to 1 second; does that mean it would take the first second after seeing the trigger to gather the new data and then write it, or would it again just do a kind of snapshot?

Just curious questions. Thanks again. Have a great day.

To be clear about how the OPC values get updated: they run on a subscription basis. All tags are subscribed the same. Subscriptions work asynchronously, meaning that value updates are sent up from the OPC server at any time, regardless of how the group is set up.

To put it a different way, if the group is set for 1 second, it says “hey OPC server, I want to know if any of these items (ALL of the items in the group) change in the span of 1 second”.

The OPC server then will send any changes that occur second to second. If no changes occur, nothing is sent. Notice that it is now up to the OPC server to manage how to figure out if the values have changed.
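A toy model of that report-by-exception behavior, in pseudo-Python (tag names are made up):

```python
# Toy model of subscription updates: the server reports only the items
# whose values changed since its last scan (tag names are invented).
last_seen = {"trigger": 0, "count": 10}

def scan(current, last):
    changed = {k: v for k, v in current.items() if last.get(k) != v}
    last.update(current)
    return changed  # an empty dict means nothing is sent to the client

print(scan({"trigger": 1, "count": 10}, last_seen))  # {'trigger': 1}
print(scan({"trigger": 1, "count": 10}, last_seen))  # {} -- no change, nothing sent
```

Note that the client only ever hears about deltas; it never gets a guaranteed full snapshot on each cycle.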

So, the manual is fairly correct in that a group with a trigger that’s not active does nothing besides evaluate run-always action items… however, what it doesn’t mention is that the OPC updates are always received for all of the items.

Bottom line: FactorySQL never explicitly asks the OPC server for item data.

You can see how the following problem might come up: You set all of your data in the plc at one time, including the trigger value. Kepware is polling the plc, finds out that several values have changed, including the trigger… but maybe doesn’t see all of the changes yet. FactorySQL gets the notification that the trigger and some other values have changed, logs them. The next cycle Kepware sees the other changes, and sends them to FSQL… but it’s too late, the trigger was already executed.

Anyhow, I just wanted to throw out some more information for you to digest, hope it helps a bit.

Regards,

[color=red]If you read down through this you will see my responses. Please let me know if that is a set of correct statements based on your answers.[/color]

[quote=“Colby.Clegg”]To be clear about how the OPC values get updated: they run on a subscription basis. All tags are subscribed the same. Subscriptions work asynchronously, meaning that value updates are sent up from the OPC server at any time, regardless of how the group is set up.

To put it a different way, if the group is set for 1 second, it says “hey OPC server, I want to know if any of these items (ALL of the items in the group) change in the span of 1 second”.


[color=red]Martin response - So then my thoughts on it basically are correct, in that when the trigger occurs it, for lack of a better word, takes a snapshot. And that data is sent to the SQL table that the group reports to.[/color]--------------------------------------------------------------------------

The OPC server then will send any changes that occur second to second. If no changes occur, nothing is sent. Notice that it is now up to the OPC server to manage how to figure out if the values have changed.


[color=red]Martin response - So in this case let's say all these values change at one time, which they do not, but we will say that as a worst-case scenario. Usually only about half or fewer change at any given time. But in this case, like I said, we will say all of them changed at one time. A good solution would be to know exactly how long it takes for all the items to change and be updated; then, after that period of time has elapsed (plus a little for the just-in-case), strobe the trigger of the group.[/color]--------------------------------------------------------------------------

So, the manual is fairly correct in that a group with a trigger that’s not active does nothing besides evaluate run-always action items… however, what it doesn’t mention is that the OPC updates are always received for all of the items.

Bottom line: FactorySQL never explicitly asks the OPC server for item data.

You can see how the following problem might come up: You set all of your data in the plc at one time, including the trigger value. Kepware is polling the plc, finds out that several values have changed, including the trigger… but maybe doesn’t see all of the changes yet. FactorySQL gets the notification that the trigger and some other values have changed, logs them. The next cycle Kepware sees the other changes, and sends them to FSQL… but it’s too late, the trigger was already executed.

Anyhow, I just wanted to throw out some more information for you to digest, hope it helps a bit.

Regards,[/quote]

Yes, that’s right.

Martin, I can’t add much to most of your questions, but I might be able to offer a few suggestions on making your data transfers more deterministic.

I also have a few applications where it is essential that an entire blob of data arrives (or is sent) before it is evaluated. With the Kepware server, it would help a lot if all of your variables are contiguous in your PLC, even if you have to copy them to a temp storage location and address the tags from there. If they are spread out, and you have a bunch of other tags updating elsewhere, then it could easily take two or more writes for all of your tags to be written down to FactorySQL, which is what you are seeing.

Also, once you have all of your bytes together, it might help to use two handshakes, one at the beginning of your variables, and one at the end, and look at both of them to be true before doing anything. This would help the case where only part of your message arrives. It’s kind of a hack, but since OPC servers do their best to pack all contiguous bytes together, it is highly unlikely that the handshakes could be updated without the entire blob being updated.
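The guard itself is trivial; in pseudo-Python it amounts to the following (tag names are invented, and in practice this check would live in a triggered action item or the PLC logic):

```python
# Minimal sketch of the two-handshake guard. Tag names are invented;
# the point is that both bracketing bits must agree before logging.
def blob_is_complete(tags):
    """Treat the blob as valid only when both bracketing bits are set."""
    return tags.get("hs_start") == 1 and tags.get("hs_end") == 1

partial = {"hs_start": 1, "value_a": 42, "hs_end": 0}  # end bit not in yet
print(blob_is_complete(partial))   # False -- wait for the next update

partial["hs_end"] = 1
print(blob_is_complete(partial))   # True -- safe to log now
```

Because the server packs contiguous registers into one read, the end bit arriving implies everything between the two handshakes arrived with it.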

Another thing to try (I’ve done this in the past) is to take advantage of the way strings are handled with Kepware. Strings are always updated as a single variable, instead of coming down in chunks (when you think about it, it couldn’t work any other way). Kepware also handles arrays, but I don’t think FactorySQL handles them on its end yet. Anyway, if your total data length <= 240 bytes, then you could create a single tag of type “String”, pass all of your variables to it, and then tease it apart with some script in an action item. That is probably as deterministic as you are going to get. (Colby, as a side note, it would be a great feature to be able to handle UDTs (as arrays) in FactorySQL. Let’s say I had an array of 80 bytes defined as a tag in Kepware. I would be able to drag this single tag into FactorySQL, but then define sub-tags that it maps into. I know a lot of us out there use very structured code, and it would be a big help, not to mention that no one else offers this.)
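The action-item script for teasing the string apart could look something like this (the fixed-width layout of 20 + 4 + 10 characters is purely an assumption; you would match it to however the PLC concatenates the registers):

```python
# Hypothetical sketch of unpacking a single packed string tag in a
# script. The fixed-width layout (20 + 4 + 10 chars) is an assumption;
# match it to however the PLC concatenates the registers.
def unpack(blob):
    return {
        "timestamp": blob[0:20].strip(),  # 20-char date/time stamp
        "code": blob[20:24].strip(),      # 4-char ASCII code
        "count": int(blob[24:34]),        # DINT rendered as 10 digits
    }

packed = "2008-06-09 07:15:00 " + "FLT1" + "0000001234"
print(unpack(packed)["count"])  # 1234
```

Since the whole string arrives as one value, either you have the complete record or you have the previous one; there is no half-updated state to worry about.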

It has been my experience over the years that playing around with timers to make something work makes me nervous. The same goes for “wire time”. If wire time is a concern, then you are going about it wrong. I would concentrate on getting the data there intact, no matter how long it takes or when it’s evaluated.

S7 - thanks for the info. One thing I can say is that sometimes I do not have all the complete information in my postings. It is a problem I am working on. Anyway, just as a clarification, here is some additional information that needs to be made clear.

  1. I have all my strings for the data in consecutive registers.
  2. I also have all my data for the DINT values in consecutive registers.
  3. I also have all the bits in consecutive registers.

In KepServer I do have the data blocks set at high numbers. I did this with the assistance of David from KepServer. I was going to use the array feature of KepServer but was unsure which software did and did not have the required un-array feature. Talking to David at KepServer, he told me that since I had all the data fields in consecutive order, using the data blocks feature was just as good as using the array.

I was on KepWare's home page over the weekend, and there is a test I found that I am going to try. It says to bring up the system clock, make a test group, put all your tags for one device in that group, then watch the clock and do an async read. This will read all the data in that group at one time, kind of a forced method. By doing so it will hold up the system until all the fields are properly read, and the system clock is used to see how long it takes. This way I will know how long it takes for Kepware to see all the data changes, which will give me a starting point. Maybe instead of having my trigger for the FSQL group at the beginning of my write to compact flash on the PLC, I should move it to the end, after the 1 second or so that the write takes, and then trigger the group. Maybe that will give enough time. I am sure it has to do with time. Just not sure.
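The shape of that test, in plain Python, is just timing a blocking read of the whole block (`read_all_tags()` here is a placeholder for whatever synchronous read call your OPC client exposes, not a real Kepware API):

```python
# Rough analogue of the "watch the clock" test: time a blocking read
# of the whole tag block. read_all_tags() is a placeholder for
# whatever synchronous read call your OPC client library offers.
import time

def read_all_tags():
    time.sleep(0.1)  # stand-in for the real device round trip
    return {"R382": 0, "R384": 1234}

start = time.monotonic()
values = read_all_tags()
elapsed = time.monotonic() - start
print(f"read {len(values)} tags in {elapsed:.3f} s")
```

Whatever number comes out of the real test is the floor for how soon after staging the data the trigger can safely fire.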

In our first attempt we used a PLC with a serial-to-Ethernet converter. This was slow because it was not true Ethernet; it was serial encapsulated. After talking to the vendor of the PLC, we found the unit that we are using now. It has a much faster processor and it is true Ethernet, so that makes the communications faster. However, I did not fully test the speeds with the new design. The old design worked well, and I think it was because it took 3+ seconds to do the same compact flash write. So these things are coming back to me now; I just want to make sure I did not miss anything, so that as we grow the system we don't run into any more problems.

Hopefully this all makes sense. If not, sorry. Have a great day.

Colby - this is more up your alley, or at least I think it is, but if anyone else knows please feel free to post. In my statements in red above, Colby confirmed I am right. So since that is correct, here is the big question:

Let's say that I run these KepServer tests and I find out it takes 1.1 seconds to update all fields. How does the update rate in FSQL correlate to the time it takes KepServer?

Should the FSQL update rate run same as, faster, or slower than the time it takes KepServer to do the work?

Is there even a benchmark for this, or is it more trial and error?

Thanks for any and all info.

[quote=“mrtweaver”]1. I have all my strings for the data in consecutive registers.
2. I also have all my data for the DINT values in consecutive registers.
3. I also have all the bits in consecutive registers.
[/quote]

Ok, but tell me this: Are the strings, dints, and bools all in consecutive registers together? If not, they will not always update at once.

Yes they are. The two 20-char ASCII strings are in R413 and R423; the other ASCII strings follow suit at R433, R435, etc. The DINT registers are marked out just before the ASCII at R384 to R410, and the booleans are actually stored in registers and called up as individual bits, R382.0 - R382.15 and R383.0 - R383.15. I think there is one extra register which is not used in between the DINT and ASCII, at R412, but other than that they are all used. This is my buffer area and is also the area the compact flash gets its data from.

I did do some testing this morning, and right now I am trying to figure out how to get into one section of KepServer. There is one where everything is shown as hex, and the numbers correspond to what you get in the OPC diagnostic screens. That would give me an exact time for how long an update takes. I can say right now, looking at it by eye, that it appears to be one second or less. But I would like a more definitive answer.

Continuing on with the test.

[quote=“Step7”][quote=“mrtweaver”]1. I have all my strings for the data in consecutive registers.
2. I also have all my data for the DINT values in consecutive registers.
3. I also have all the bits in consecutive registers.
[/quote]

Ok, but tell me this: Are the strings, dints, and bools all in consecutive registers together? If not, they will not always update at once.[/quote]

Martin,

I’m probably going to ask a stupid question, which is common for me, but here it goes:

Why do you need to log data so quickly?

The reason I ask this question is I want to understand what you’re going to be doing with the data. Depending on what you want to accomplish, it may make your life a lot simpler to accumulate statistics on the controller level and then log the statistics instead of the raw data.

For example, let's say you want to know the number of items handled by your system. Instead of recording the status of the counter input, you can count the number of items handled within the last batch and then write the count to a register that is logged.

Granted, this might not fit your situation but I would challenge you to think about how and why you’re logging data.

Just food for thought…

Thanks for the input, I appreciate it. But it does not fit the requirements of the system as MGMT wanted it. You see, what they eventually want to do is reward good producing employees. They also want to track what is going on at the machine level, so that if a problem is seen early enough it can be responded to a lot quicker. In our existing system, which is antique, there is no data logging other than the counts, so we take it on the operator's word that they had serious problems. With the new system they can be watched and monitored to make sure they are not goofing off or any such thing, which has been known to happen but cannot be properly proved, which is why MGMT has requested the system do what it is currently capable of. Now as for the speed: the fastest machine we have here is capable of 5 pcs per second, so if we start having troubles we can see them early enough. And by keeping a historical log with time and date stamps, we can see how many problems the operator had within a given time frame and how long it took the operator to repair each problem as it came up. We will also log how long it took mechanics to get there, how long they were there, and such. This is what they wanted, so this is what I am developing.

I have all my PLC programs from when I first started designing this system. I started out with version A and I am now up to something like M; I keep old copies and make new ones as changes occur, so if a problem comes up I can revert to an older version that works until I solve the problem in the new version. That is how many changes I have gone through thus far. In my first version it was exactly as you mentioned: it kept a log of how many times a certain occurrence happened, and then every 15 minutes it would dump that data and clear things out. But as the project grew and more people got involved, things changed, as you can see. Hopefully this answers your questions.

Have a great day. :smiley:

[quote=“MickeyBob”]Martin,

I’m probably going to ask a stupid question, which is common for me, but here it goes:

Why do you need to log data so quickly?

The reason I ask this question is I want to understand what you’re going to be doing with the data. Depending on what you want to accomplish, it may make your life a lot simpler to accumulate statistics on the controller level and then log the statistics instead of the raw data.

For example, let's say you want to know the number of items handled by your system. Instead of recording the status of the counter input, you can count the number of items handled within the last batch and then write the count to a register that is logged.

Granted, this might not fit your situation but I would challenge you to think about how and why you’re logging data.

Just food for thought…[/quote]

I also forgot to say, this is not a stupid question, and I believe in my dad's old saying, "There is no such thing as a stupid question." It is the best way to learn. That is why I like this forum so much.

BTW, I just got my books from Amazon: 2 books on understanding and programming Python and 2 books on expressions. I have my reading cut out for me, but since I like to read so much I don't see a problem. As for me, I would much rather surf the net and get answers, or read. My daughter, on the other hand, thinks life revolves around the TV. Oh well, so my genes did not transfer to her. Such is life.

[quote=“MickeyBob”]Martin,

I’m probably going to ask a stupid question, which is common for me, but here it goes:

Why do you need to log data so quickly?

The reason I ask this question is I want to understand what you’re going to be doing with the data. Depending on what you want to accomplish, it may make your life a lot simpler to accumulate statistics on the controller level and then log the statistics instead of the raw data.

For example, let's say you want to know the number of items handled by your system. Instead of recording the status of the counter input, you can count the number of items handled within the last batch and then write the count to a register that is logged.

Granted, this might not fit your situation but I would challenge you to think about how and why you’re logging data.

Just food for thought…[/quote]

I still stick by my point.

Let’s say throughput is measured by pieces/min. Calculate a running average of pieces/min in the controller. This value can be continuously calculated by the controller and logged at, say, 15 second intervals. This gives you the management level information you need and logging rate and “wire” time aren’t really an issue. I’m not sure there is a practical need to know this kind of information at a higher resolution.
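The controller-side statistic would look something like this sketch (the window size and 15-second interval are just the example values above):

```python
# Sketch of the controller-side statistic described above: keep a
# running pieces/min figure and log it on a fixed interval instead of
# logging raw counts. Window size and interval are example assumptions.
from collections import deque

class RateAverager:
    def __init__(self, window=4):
        self.samples = deque(maxlen=window)  # last N interval rates

    def add_interval(self, pieces, seconds):
        self.samples.append(pieces / seconds * 60.0)  # convert to pieces/min

    def average(self):
        return sum(self.samples) / len(self.samples)

avg = RateAverager()
for pieces in (70, 80, 75, 75):  # four 15-second count intervals
    avg.add_interval(pieces, 15)
print(avg.average())  # 300.0 pieces/min
```

Logging one averaged value every 15 seconds means the exact arrival time of any single update stops mattering.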

Just tell me if I’m all wet…

P.S. Happy reading!

Not all wet. If you would like, you may come by and argue this point. I wholly understand where you are coming from. But in some cases it is better to know where to pick your arguments, kind of like being married. Sometimes it is best to let the higher-ups make the choice; then, when it flops or they have the epiphany of knowledge, since I have the backward-compatible files that do most of what you mention here, I can use them. CYA all the way.

[quote=“MickeyBob”]I still stick by my point.

Let’s say throughput is measured by pieces/min. Calculate a running average of pieces/min in the controller. This value can be continuously calculated by the controller and logged at, say, 15 second intervals. This gives you the management level information you need and logging rate and “wire” time aren’t really an issue. I’m not sure there is a practical need to know this kind of information at a higher resolution.

Just tell me if I’m all wet…

P.S. Happy reading![/quote]