MySQL SQL file restore

I have done a SQL dump from the production server. I took the file to a test server to restore. It restored all tables quickly except these history tables. Here is the start of the SQL file. There are 10 of these tables, some with 500,000 records in them. The test server has been working on them for 24+ hours now. It is not a hardware issue. What would cause this, and is there a remedy to make it quicker?

[code]SET FOREIGN_KEY_CHECKS=0;


-- Table structure for sqlt_data_1_2010_06


DROP TABLE IF EXISTS sqlt_data_1_2010_06;
CREATE TABLE sqlt_data_1_2010_06 (
tagid int(11) default NULL,
intvalue bigint(20) default NULL,
floatvalue double default NULL,
stringvalue varchar(255) default NULL,
datevalue datetime default NULL,
dataintegrity int(11) default NULL,
t_stamp bigint(20) default NULL,
KEY sqlt_data_1_2010_06tagidndx (tagid),
KEY sqlt_data_1_2010_06t_stampndx (t_stamp)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


-- Records of sqlt_data_1_2010_06


INSERT INTO sqlt_data_1_2010_06 VALUES ('1', null, null, null, null, '32', '1276018332926');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', null, null, null, null, '500', '1276018333193');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018334695');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '500', '1276018336457');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018337459');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '500', '1276018339878');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018340879');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '500', '1276018341380');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018341630');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '500', '1276018342631');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018344134');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '500', '1276018344635');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', '44', null, null, null, '192', '1276018346136');[/code]

I have found I did not need these tables. I am still concerned that if someone did need to restore those files, it would be a very long process. These history tables were made from the history option right on the tag, not from a transaction group. The tables had no primary key either :scratch: .

Well, I think the big problem is how MySQL goes about restoring data. If I recall correctly, it runs each query as its own transaction, which, if accurate, means you could speed it up 100x by managing the transactions better.
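
To sketch what I mean (these are standard MySQL session settings, nothing Ignition-specific, and the table is just the one from your dump):

[code]-- Before replaying the dump, in the same session:
SET autocommit = 0;          -- stop committing after every single INSERT
SET unique_checks = 0;       -- skip per-row unique checks during the bulk load
SET foreign_key_checks = 0;  -- the dump already does this at the top

-- ... all of the INSERT statements run here ...

COMMIT;                      -- one flush for the whole batch
SET unique_checks = 1;
SET foreign_key_checks = 1;[/code]

With autocommit on, InnoDB flushes to disk after every single-row INSERT, which is where the hours go; one COMMIT at the end flushes the whole batch once.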

I know that we have a tool or two around here somewhere that actually does this, as we’ve run into this issue as well in the past (with other types of history, not tag history). Unfortunately we’ve never packaged it up and made it available… but maybe I can track it down.

Regarding the primary key: you're right, these tables don't have a primary key. They should have an index on the tag id and timestamp, as those are what's used to look up the values, but a primary key implies uniqueness, and while the combination of tag id and timestamp probably is unique, we haven't defined it that way.
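
If you wanted to enforce that uniqueness yourself on a test copy, something like the following would do it. To be clear, this is just a sketch, not something we set up: verify there are no duplicates or NULLs first, since MySQL will make both columns NOT NULL as part of adding the key.

[code]-- Check the assumption that (tagid, t_stamp) really is unique:
SELECT tagid, t_stamp, COUNT(*) AS dupes
FROM sqlt_data_1_2010_06
GROUP BY tagid, t_stamp
HAVING COUNT(*) > 1;

-- If that returns nothing (and neither column holds NULLs),
-- a composite primary key can be added:
ALTER TABLE sqlt_data_1_2010_06
  ADD PRIMARY KEY (tagid, t_stamp);[/code]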

Regards,

Sounds like I need to discourage them from using tag history.

In looking at this database now, I see a few tables I wonder about:

sqlth_drv (database driver list?)
sqlth_partitions (range of history tables?)
sqlth_sce (?)
sqlth_scinfo (list of scan classes?)
sqlth_te (tag info?)

Are these just remnants from users creating history on those tags? Can they safely be removed?

Yes, those tables are all used by SQLTags History. If you're not using them, you can delete them.
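
If you do remove them, take a database backup first and make sure nothing on the Gateway is still logging tag history to this datasource; then it's just:

[code]DROP TABLE IF EXISTS sqlth_drv;
DROP TABLE IF EXISTS sqlth_partitions;
DROP TABLE IF EXISTS sqlth_sce;
DROP TABLE IF EXISTS sqlth_scinfo;
DROP TABLE IF EXISTS sqlth_te;[/code]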

I feel like I'm missing something, but sure: you don't have to use SQLTags history. Transaction groups are also used for history, and the two methods have different results. You'll just have to look at the best option for what you're trying to accomplish. One phenomenon we've observed is that people get caught up on the name and think that SQLTags History is the only way to do history in Ignition, which is definitely not true.

That said, there are probably other ways to attack the problem you originally posted about, if functionally you want to keep using SQLTags History. First, from the db side of things, using raw backups (simply copying the data files) instead of logical backups (generating SQL) would mean that restoring wouldn't require all of that SQL to be parsed and executed again.

More fundamentally, though, I took a closer look at the actual data contained in the queries of your post. There seem to be two things going on: 1) two systems, one licensed, one not, logging to the same tables. This happens frequently with backups and restores across machines and can lead to strange results. 2) The same value appears to be logged multiple times, which shouldn't happen. Basically, what I'm getting at is that you might simply have way more data than you need. The dual-source issue can be resolved by disabling the second server (or changing its "system name"). The repeat-data issue, I suspect, has probably been fixed on our side.
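
As a side note on the logical-backup route: your dump uses one INSERT per row. mysqldump's --extended-insert option (on by default in newer versions) batches many rows into each statement, so the server parses and commits far fewer statements. The difference, using rows from your dump:

[code]-- Single-row form (one parse, one autocommit per row):
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', null, null, null, null, '32', '1276018332926');
INSERT INTO sqlt_data_1_2010_06 VALUES ('1', null, null, null, null, '500', '1276018333193');

-- Extended form (one statement covers many rows):
INSERT INTO sqlt_data_1_2010_06 VALUES
('1', null, null, null, null, '32', '1276018332926'),
('1', null, null, null, null, '500', '1276018333193');[/code]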

Hope this helps,

Thanks Colby, :prayer:

Sorry, I was wrong about my analysis of the data. It's much simpler than what I was describing: the value was simply bouncing between “good” quality and “stale” quality, causing it to log again. This would most likely be caused by driver timeouts, which have definitely improved a bit since that time.

If you would like to recover that data, I'm sure I could whip up a Python script to clean it up a bit. If you're mostly just worried about the future, no problem, and hopefully we can help you figure out some better strategies.
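
For the record, the rough shape of that cleanup in plain SQL, assuming 192 is the good-quality code and 500 the stale code (which is what the alternating values in your dump suggest); run it against a copy of the table first:

[code]-- Hedged sketch: drop the repeated stale-quality rows, keeping only
-- the good-quality samples. Verify the quality codes before running for real.
DELETE FROM sqlt_data_1_2010_06
WHERE dataintegrity <> 192;[/code]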

We are fine Colby.

Thanks,
Jim