Synchronizing time-based groups

Hi,

I have a problem that I am trying to resolve; maybe someone has an idea. I am using FactorySQL to log datapoints to SQL Server using stored procedure groups. There are 1093 datapoints in total, and each datapoint is its own group. The update rate is 20 seconds.
I am passing the following parameters to the stored procedure:

DateTime - using the action item DateFormat(Now(), "MM/dd/yyyy HH:mm:ss")
DatapointId - a number that identifies the point
Value - the OPC item value

The problem I am having is that I need the same timestamp for all the values I am logging. Some of the datapoints are off by a second, and for this application it is critical that they match. What is the order of execution in FSQL if all the groups are inside a folder? Does it trigger them in the order they are displayed? Can I somehow use an action item that retains its value, so I can trigger it first to capture the time and then use that value in all the other groups?

This application is a conversion from RSSQL to FSQL. In RSSQL the trigger is the same 20-second clock from the time RSSQL starts until it is stopped, and all 1093 points get the same timestamp.

Thanks

jdelgado,

The IA folks will probably correct me if I’m wrong, but there is no way to guarantee an identical timestamp amongst different groups. The only way to guarantee identical timestamps is to have the tags in the same standard group.

If you need to combine the logged values from multiple standard groups/tables you can use joins with “loose” time criteria such as this (assuming you’re using MS SQL Server):

SELECT t1.t_stamp, col1, col2, ..., colN FROM tablename1 AS t1 INNER JOIN tablename2 AS t2 ON ABS(DATEDIFF(ss, t1.t_stamp, t2.t_stamp)) < 5

If you have to join many tables, the join is going to get very long and very cumbersome very quickly.

With your layout, you’ll have to do a self join, something along this route:

SELECT t1.t_stamp, t1.col1, t2.col1, t3.col1 FROM tablename AS t1 INNER JOIN tablename AS t2 ON ABS(DATEDIFF(ss, t1.t_stamp, t2.t_stamp)) < 5 INNER JOIN tablename AS t3 ON ABS(DATEDIFF(ss, t1.t_stamp, t3.t_stamp)) < 5

This obviously gets very cumbersome very quickly too. You may also need to add some additional indexes on t_stamp to have acceptable query times.
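To make the "loose" time join concrete, here is a minimal runnable sketch using SQLite (DATEDIFF is MS SQL Server only, so an ABS() of epoch-second differences stands in for it; the table and column names are illustrative, not from anyone's actual schema):

```python
import sqlite3

# Two tables logged by separate groups, with timestamps one second apart.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (t_stamp TEXT, col1 REAL)")
cur.execute("CREATE TABLE table2 (t_stamp TEXT, col1 REAL)")
cur.execute("INSERT INTO table1 VALUES ('2024-01-01 12:00:00', 1.0)")
cur.execute("INSERT INTO table2 VALUES ('2024-01-01 12:00:01', 2.0)")

# The "loose" criterion: rows match if their timestamps are within 5 seconds.
rows = cur.execute("""
    SELECT t1.t_stamp, t1.col1, t2.col1
    FROM table1 AS t1
    INNER JOIN table2 AS t2
      ON ABS(strftime('%s', t1.t_stamp) - strftime('%s', t2.t_stamp)) < 5
""").fetchall()
print(rows)  # the two rows pair up despite the 1-second skew
```

Note the ABS(): without it, a signed difference less than 5 would also match every row where t2's timestamp is arbitrarily far in the past.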

I know that you’re not using the standard group layout but you might want to consider changing your architecture to use the standard groups.

I had used other competing products with a database schema similar to yours before I started using FSQL, and my natural inclination was to continue along that vein. However, I quickly made the switch to a “wide” table structure and haven’t regretted it. For most of my purposes, it is so much easier to work with the standard group schema than with the “narrow” schema.

Hi,

First off, the organization of groups into folders doesn’t affect execution at all. The groups are all scheduled & execute independently. Some time in the future we will likely add the ability to execute multiple groups together, however, even if that currently existed, I’m not sure it’s really the best solution in this case.

To me, it sounds like it would be better to adjust the mechanism of how this works and find a way to get all points into one group. A block group could be a good option, with a column for the point id, and a column for the value. Then, when the items have been written, you could trigger a stored procedure to run which would examine the table in order to get the data.

I’m dramatically over simplifying, of course, but it seems like it could be done. The other option, to keep all of the groups and try to synchronize execution, probably wouldn’t be much easier to set up, but probably could be done.
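As a rough illustration of why the single-group route guarantees matching timestamps, here is a hypothetical sketch (plain Python with SQLite, not FactorySQL itself; table and column names are made up): all points are written in one transaction, sharing one timestamp captured up front.

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE narrow_log (t_stamp TEXT, point_id INTEGER, value REAL)")

# Stand-in OPC values for the 1093 points (illustrative only).
points = {pid: pid * 0.5 for pid in range(1, 1094)}

# Capture ONE timestamp for the whole scan, before any writes.
t_stamp = datetime(2024, 1, 1, 12, 0, 0).strftime("%m/%d/%Y %H:%M:%S")

with conn:  # one transaction for the entire cycle
    cur.executemany(
        "INSERT INTO narrow_log VALUES (?, ?, ?)",
        [(t_stamp, pid, val) for pid, val in points.items()],
    )

distinct = cur.execute("SELECT COUNT(DISTINCT t_stamp) FROM narrow_log").fetchone()[0]
print(distinct)  # 1 -- every point carries the identical timestamp
```

Because the timestamp is captured once and reused, skew between points is impossible by construction, which is the property 1093 independent groups cannot give you.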

You could have one standard group (the “execution coordinator”) that does not interact with the database. It would run every 20 seconds and have one action item which calls StoreVariable("time", CurrentDateTime()).

Each stored procedure group would be set to run at 1 second (or a little slower, maybe 5). It would have a run-always action item which called GetVariable(“time”, null). This would be the group trigger (on change). Then, instead of using the “store timestamp” option, there would be another action item (on-trigger) which would simply reference the trigger action item, and write the value to the time parameter. In this way, all of the groups will be triggered when the coordinator sets the new time, and all of the groups will use that timestamp.
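The coordinator pattern above can be sketched conceptually in plain Python (this is NOT FactorySQL code; store_variable/get_variable are stand-ins for FactorySQL's StoreVariable/GetVariable, and the trigger-on-change plumbing is omitted):

```python
from datetime import datetime

_variables = {}  # stand-in for FactorySQL's shared variable store

def store_variable(name, value):
    _variables[name] = value

def get_variable(name, default=None):
    return _variables.get(name, default)

def coordinator_scan():
    # Runs every 20 s; captures one timestamp for the whole cycle.
    store_variable("time", datetime(2024, 1, 1, 12, 0, 20))

def logging_group_scan(point_id, value):
    # Runs every 1-5 s; fires when "time" changes (change detection omitted)
    # and uses the STORED timestamp, never its own clock.
    return (get_variable("time"), point_id, value)

coordinator_scan()
rows = [logging_group_scan(pid, pid * 0.5) for pid in (1, 2, 3)]
shared = len({r[0] for r in rows})
print(shared)  # 1 -- all groups stamp with the coordinator's single time
```

The key design point is that no logging group ever calls Now() itself; the only clock read per cycle happens in the coordinator.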

Hope this helps,

Hi Mickey,

I understand what you mean, but the narrow format allows the flexibility of adding new datapoints without having to modify the database schema. In our application we let users select the datapoints to trend by Plant, Line, MCC, etc., since the datapoint id is linked to other tables, allowing us to populate tree controls that show the available points, which users can select and save as templates to load dynamically. We also interface with other databases to merge quality results with process data, and these datapoint ids come in handy there. I know that with flexibility comes complexity.

I wish I could find an easier solution, because yes, narrow can be a pain since we have to dynamically generate crosstab queries to populate our trends.
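For readers unfamiliar with the dynamically generated crosstab being described, here is a hypothetical sketch (Python with SQLite; names are illustrative): the narrow (t_stamp, point_id, value) table is pivoted into one column per point via conditional aggregation, with the column list built at runtime from the ids present.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE narrow_log (t_stamp TEXT, point_id INTEGER, value REAL)")
cur.executemany(
    "INSERT INTO narrow_log VALUES (?, ?, ?)",
    [("2024-01-01 12:00:00", 1, 10.0),
     ("2024-01-01 12:00:00", 2, 20.0),
     ("2024-01-01 12:00:20", 1, 11.0),
     ("2024-01-01 12:00:20", 2, 21.0)],
)

# Discover which points exist, then build one MAX(CASE ...) column per id.
point_ids = [r[0] for r in cur.execute(
    "SELECT DISTINCT point_id FROM narrow_log ORDER BY point_id")]
cols = ", ".join(
    f"MAX(CASE WHEN point_id = {pid} THEN value END) AS p{pid}"
    for pid in point_ids
)
sql = f"SELECT t_stamp, {cols} FROM narrow_log GROUP BY t_stamp ORDER BY t_stamp"
wide = cur.execute(sql).fetchall()
print(wide)
```

This is the price of the narrow schema: the pivot query has to be regenerated whenever the set of selected points changes, which is exactly the pain jdelgado describes.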

[quote=“MickeyBob”]jdelgado,

The IA folks will probably correct me if I’m wrong, but there is no way to guarantee an identical timestamp amongst different groups. The only way to guarantee identical timestamps is to have the tags in the same standard group.

If you need to combine the logged values from multiple standard groups/tables you can use joins with “loose” time criteria such as this (assuming you’re using MS SQL Server):

SELECT t1.t_stamp, col1, col2, ..., colN FROM tablename1 AS t1 INNER JOIN tablename2 AS t2 ON ABS(DATEDIFF(ss, t1.t_stamp, t2.t_stamp)) < 5

If you have to join many tables, the join is going to get very long and very cumbersome very quickly.

With your layout, you’ll have to do a self join, something along this route:

SELECT t1.t_stamp, t1.col1, t2.col1, t3.col1 FROM tablename AS t1 INNER JOIN tablename AS t2 ON ABS(DATEDIFF(ss, t1.t_stamp, t2.t_stamp)) < 5 INNER JOIN tablename AS t3 ON ABS(DATEDIFF(ss, t1.t_stamp, t3.t_stamp)) < 5

This obviously gets very cumbersome very quickly too. You may also need to add some additional indexes on t_stamp to have acceptable query times.

I know that you’re not using the standard group layout but you might want to consider changing your architecture to use the standard groups.

I had used other competing products with a database schema similar to yours before I started using FSQL, and my natural inclination was to continue along that vein. However, I quickly made the switch to a “wide” table structure and haven’t regretted it. For most of my purposes, it is so much easier to work with the standard group schema than with the “narrow” schema.[/quote]

Thanks Colby for the response. Which one do you recommend, the block group or the synchronized execution?

[quote=“Colby.Clegg”]Hi,

First off, the organization of groups into folders doesn’t affect execution at all. The groups are all scheduled & execute independently. Some time in the future we will likely add the ability to execute multiple groups together, however, even if that currently existed, I’m not sure it’s really the best solution in this case.

To me, it sounds like it would be better to adjust the mechanism of how this works and find a way to get all points into one group. A block group could be a good option, with a column for the point id, and a column for the value. Then, when the items have been written, you could trigger a stored procedure to run which would examine the table in order to get the data.

I’m dramatically over simplifying, of course, but it seems like it could be done. The other option, to keep all of the groups and try to synchronize execution, probably wouldn’t be much easier to set up, but probably could be done.

You could have one standard group (the “execution coordinator”) that does not interact with the database. It would run every 20 seconds and have one action item which calls StoreVariable("time", CurrentDateTime()).

Each stored procedure group would be set to run at 1 second (or a little slower, maybe 5). It would have a run-always action item which called GetVariable(“time”, null). This would be the group trigger (on change). Then, instead of using the “store timestamp” option, there would be another action item (on-trigger) which would simply reference the trigger action item, and write the value to the time parameter. In this way, all of the groups will be triggered when the coordinator sets the new time, and all of the groups will use that timestamp.

Hope this helps,[/quote]

Well, I guess both are legitimate options, depending on how much flexibility you have with your system design.

The block group approach will definitely result in less load on the system. Having 1 group instead of 1093 is going to be MUCH lighter on the system. It’s also lighter on the database, since you’re only writing the data in one (or a few) transactions, and not in 1000+.

HOWEVER, this doesn’t mean the system can’t handle 1000+ groups, so if you really can’t change the workflow at all, the synchronized execution method should work. The first method, with a block group, also presents some challenges: block groups don’t support polled reads, so you’ll need some way of detecting when all rows have been written. Assuming that all values change each time, this won’t be too bad, but if some values don’t change it could be a bit difficult.

If you use polled reads with the stored procedure groups, each triggered execution could take hundreds of milliseconds (usually 100-500). Times 1093, that’s a lot of work (over a minute in total). It gets split up a bit, but could still prove difficult to manage.

Ultimately there are lots of considerations. You’ll have to evaluate them and see which one makes more sense. Please feel free to ask any other questions that come to mind, though.

Regards,