Need Help Tuning a Query

I need help rewriting a query. This particular query sometimes runs super slow and other times finishes in milliseconds, and it's been a major pain. I'm not against restructuring tables, and no single table has more than 5000 rows in it.

The tables involved:

[code]CREATE TABLE domain_availability_monitoring (
ID int(11) NOT NULL AUTO_INCREMENT,
C_ID int(11) NOT NULL,
monitoring_schedule_id bigint(20) NOT NULL,
site_id int(11) NOT NULL,
monitoring_type_id int(11) NOT NULL,
last_scanned datetime DEFAULT NULL,
expiration_date datetime DEFAULT NULL,
raw_text text,
ls_able_to_parse bit(1) DEFAULT b'1',
whois_id int(11) DEFAULT NULL,
PRIMARY KEY (ID),
KEY fk_domain_availability_monitoring_customers1 (C_ID),
KEY fk_domain_availability_monitoring_monitoring_schedule1 (monitoring_schedule_id),
KEY fk_domain_availability_monitoring_customers_sites1 (site_id),
KEY fk_domain_availability_monitoring_lu_da_monitoring_type1 (monitoring_type_id),
CONSTRAINT fk_domain_availability_monitoring_customers1 FOREIGN KEY (C_ID) REFERENCES customers (C_ID) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT fk_domain_availability_monitoring_customers_sites1 FOREIGN KEY (site_id) REFERENCES customers_sites (site_id) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT fk_domain_availability_monitoring_lu_da_monitoring_type1 FOREIGN KEY (monitoring_type_id) REFERENCES lu_da_monitoring_type (monitoring_type_id) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT fk_domain_availability_monitoring_monitoring_schedule1 FOREIGN KEY (monitoring_schedule_id) REFERENCES monitoring_schedule (id) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=2413 DEFAULT CHARSET=latin1$$

CREATE TABLE lu_time_zones (
id int(11) NOT NULL AUTO_INCREMENT,
offset decimal(3,1) NOT NULL,
name varchar(45) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY name_UNIQUE (name)
) ENGINE=InnoDB AUTO_INCREMENT=427 DEFAULT CHARSET=utf8$$

CREATE TABLE customers (
C_ID int(11) NOT NULL AUTO_INCREMENT,
C_Email varchar(80) NOT NULL,
C_Date_Added timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
C_UniqueKey varchar(38) NOT NULL,
C_element_id char(36) DEFAULT NULL,
C_Last_Login timestamp NULL DEFAULT NULL,
C_Registering_IP varchar(45) DEFAULT NULL,
C_TimeZoneID int(11) DEFAULT '324',
C_AdjustForDST tinyint(4) NOT NULL DEFAULT '0',
PRIMARY KEY (C_ID),
UNIQUE KEY C_Email_UNIQUE (C_Email),
KEY fk_customers_lu_time_zones1_idx (C_TimeZoneID),
CONSTRAINT fk_customers_lu_time_zones1 FOREIGN KEY (C_TimeZoneID) REFERENCES lu_time_zones (id) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=4763 DEFAULT CHARSET=latin1$$

CREATE TABLE customers_sites (
site_id int(11) NOT NULL AUTO_INCREMENT,
C_ID int(11) NOT NULL,
url_or_ip varchar(255) DEFAULT NULL,
have_admin_access bit(1) DEFAULT b'0',
active bit(1) DEFAULT b'0',
ts_site_seal_approved date DEFAULT NULL,
ssl_enabled bit(1) DEFAULT b'0',
PRIMARY KEY (site_id,C_ID),
KEY fk_customers_sites_customers1 (C_ID),
KEY idx_cs_url_ip (url_or_ip,ts_site_seal_approved),
CONSTRAINT fk_customers_sites_customers1 FOREIGN KEY (C_ID) REFERENCES customers (C_ID) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=3565 DEFAULT CHARSET=utf8$$
[/code]

The query. It returns the results I'm looking for; it just needs to be tuned, rewritten, or something else.

[php]SELECT dam.id, dam.expiration_date,cs.url_or_ip, ms.notify_who_email,
dam.raw_text, ms.frequency_in_days, dam.monitoring_type_id
FROM domain_availability_monitoring dam,
customers_sites cs,
monitoring_schedule ms,
customers c,
lu_time_zones tz
where dam.monitoring_schedule_id = ms.id and cs.site_id = dam.site_id and cs.active =1 and c.c_id = cs.c_id and tz.id = c.c_timezoneid
and ((dam.monitoring_type_id in (2,3)
and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) >= 0 or dam.last_scanned is null)
or (dam.monitoring_type_id = 1 and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) > 0
and (TIMESTAMPDIFF(MINUTE, date_sub(dam.expiration_date, INTERVAL ms.frequency_in_days DAY), UTC_TIMESTAMP() )) >= 0
or dam.expiration_date is null))[/php]
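
One idea, for what it's worth: the same date checks can be written with the interval math moved to the UTC_TIMESTAMP() side, so the columns aren't wrapped in TIMESTAMPDIFF(). This is only a sketch; direct datetime comparisons work to the second where TIMESTAMPDIFF(MINUTE, ...) works in whole minutes, so behaviour right at the boundary can differ slightly, and the mysqli wrapper and credentials are purely illustrative.

[php]<?php
// Sketch only: the same SELECT with the interval math moved off the columns.
// The connection details are placeholders; adjust to whatever the app uses.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$sql = "
    SELECT dam.id, dam.expiration_date, cs.url_or_ip, ms.notify_who_email,
           dam.raw_text, ms.frequency_in_days, dam.monitoring_type_id
    FROM domain_availability_monitoring dam,
         customers_sites cs,
         monitoring_schedule ms,
         customers c,
         lu_time_zones tz
    WHERE dam.monitoring_schedule_id = ms.id
      AND cs.site_id = dam.site_id
      AND cs.active = 1
      AND c.c_id = cs.c_id
      AND tz.id = c.c_timezoneid
      AND (
            (dam.monitoring_type_id IN (2,3)
              AND dam.last_scanned <= UTC_TIMESTAMP() - INTERVAL ms.frequency_in_days DAY)
            OR dam.last_scanned IS NULL
            OR (dam.monitoring_type_id = 1
              AND dam.last_scanned < UTC_TIMESTAMP() - INTERVAL ms.frequency_in_days DAY
              AND dam.expiration_date <= UTC_TIMESTAMP() + INTERVAL ms.frequency_in_days DAY)
            OR dam.expiration_date IS NULL
          )";

$result = $db->query($sql);
[/php]

With only a few thousand rows this may not change the plan at all; it mainly keeps the date logic in a form an index could use later.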

Your SQL is bad. You need to JOIN the tables. That's why it is slow.

These should be joins, not ANDs:

dam.monitoring_schedule_id = ms.id
AND cs.site_id = dam.site_id
AND c.c_id = cs.c_id
AND tz.id = c.c_timezoneid

I'm not against switching it to "ANSI" SQL vs. "conventional" SQL and testing to see if the results differ over time, but I fail to see how that would make a difference in optimization. I've always looked at it as personal taste.

You said it sometimes runs slow and other times runs quickly? Have you determined when the slowdowns occur? I would think the query itself is not the issue if it is only occasionally slow, but I would also check the indexing on the individual tables.

[member=72272]astonecipher[/member], if query caching is turned on it will fool you. It will run faster when it pulls the query from cache and slower when it doesn’t. The tables need to be joined.
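
A quick way to see whether the cache is skewing the timings (this only applies to older MySQL; the query cache is gone in 8.0) is to watch the cache counters between runs. A rough sketch, assuming mysqli with placeholder credentials (the two SHOW statements can just as easily be run from any client):

[php]<?php
// Dump the query-cache settings and hit counters (placeholder credentials).
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

foreach ($db->query("SHOW VARIABLES LIKE 'query_cache%'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}

// Qcache_hits climbing between test runs means the fast runs came from the cache.
foreach ($db->query("SHOW STATUS LIKE 'Qcache%'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}
[/php]

Putting SQL_NO_CACHE right after the SELECT keyword while testing also takes the cache out of the picture.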

If I restart MySQL, I can't reproduce the problem (sometimes it just hangs for a while and eventually finishes).

It never hangs when I'm testing it; it only does this when it runs on its own every 2 hours. Then one day of the week it will just freak out and kill the CPU; the rest of the time it's fine, without issue.

When I put it into ANSI SQL:

[php]SELECT dam.id, dam.expiration_date,cs.url_or_ip, ms.notify_who_email,
dam.raw_text, ms.frequency_in_days, dam.monitoring_type_id
FROM domain_availability_monitoring dam
INNER JOIN monitoring_schedule ms ON dam.monitoring_schedule_id = ms.id
INNER JOIN customers_sites cs ON cs.site_id = dam.site_id
INNER JOIN Customers c ON c.c_id = cs.c_id
INNER JOIN lu_time_zones tz ON tz.id = c.c_timezoneid
where cs.active =1
and ((dam.monitoring_type_id in (2,3)
and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) >= 0 or dam.last_scanned is null)
or (dam.monitoring_type_id = 1 and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) > 0
and (TIMESTAMPDIFF(MINUTE, date_sub(dam.expiration_date, INTERVAL ms.frequency_in_days DAY), UTC_TIMESTAMP() )) >= 0
or dam.expiration_date is null))[/php]

Then I compare the EXPLAIN plans between the two: they are identical (which is what I expect; ANSI vs. non-ANSI shouldn't have any impact on performance). As you can see, there are only 1176 rows in the main table that get filtered down. That's such a small number, and I think I have everything indexed right.

[code]
id  select_type  table  type    key      key_len  ref                              rows  Extra
 1  SIMPLE       dam    ALL     -        -        -                                1176  Using where
 1  SIMPLE       ms     eq_ref  PRIMARY  8        mydb.dam.monitoring_schedule_id     1  Using where
 1  SIMPLE       cs     ref     PRIMARY  4        mydb.dam.site_id                    1  Using where
 1  SIMPLE       c      eq_ref  PRIMARY  4        mydb.cs.C_ID                        1
 1  SIMPLE       tz     eq_ref  PRIMARY  4        mydb.c.C_TimeZoneID                 1  Using index

possible_keys:
  dam: fk_domain_availability_monitoring_monitoring_schedule1,
       fk_domain_availability_monitoring_customers_sites1,
       fk_domain_availability_monitoring_lu_da_monitoring_type1
  ms:  PRIMARY
  cs:  PRIMARY, fk_customers_sites_customers1
  c:   PRIMARY, fk_customers_lu_time_zones1_idx
  tz:  PRIMARY
[/code]

I'm really unsure why it randomly chokes.

The tables are joined, Kevin; you're just an "ANSI coder" vs. a "non-ANSI" coder. Explicit vs. implicit joins shouldn't matter.

I think I have this figured out. Apparently I was using a table (lu_time_zones) in the past, found a better way to do it, and never removed it from the query; as you can see, none of the fields from that table were being used. This also let me remove the customers table (since it was only needed to join to the time zone table).

So removing these two tables has to make it faster. Hopefully that fixes it; only time will tell.

[php]SELECT dam.id, dam.expiration_date,cs.url_or_ip, ms.notify_who_email,
dam.raw_text, ms.frequency_in_days, dam.monitoring_type_id
FROM domain_availability_monitoring dam,
customers_sites cs,
monitoring_schedule ms
where dam.monitoring_schedule_id = ms.id and cs.site_id = dam.site_id and cs.active =1
and ((dam.monitoring_type_id in (2,3)
and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) >= 0 or dam.last_scanned is null)
or (dam.monitoring_type_id = 1 and TIMESTAMPDIFF(MINUTE, date_add(dam.last_scanned, INTERVAL ms.frequency_in_days DAY),UTC_TIMESTAMP()) > 0
and (TIMESTAMPDIFF(MINUTE, date_sub(dam.expiration_date, INTERVAL ms.frequency_in_days DAY), UTC_TIMESTAMP() )) >= 0
or dam.expiration_date is null))
[/php]

Well that didn’t fix it…

The status that shows up in MySQL is "Writing to net", and it's always on this query. When I kill the connection, the CPU goes back to normal.
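
For anyone else chasing this, the state is visible in SHOW FULL PROCESSLIST, and killing the offending thread id is what brings the CPU back down. A rough way to watch for it (mysqli and the credentials are only for illustration):

[php]<?php
// List the running MySQL threads; the stuck one shows State = "Writing to net".
$db = new mysqli('localhost', 'user', 'pass', 'mydb');   // placeholder credentials

foreach ($db->query('SHOW FULL PROCESSLIST') as $p) {
    printf("%s\t%s\t%ss\t%s\n", $p['Id'], $p['State'], $p['Time'], $p['Info'] ?? '');
}

// Then, for example:
// $db->query('KILL 1234');   // 1234 = the Id of the stuck thread (example value)
[/php]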

Maybe it’s not a query thing, maybe it’s something else.

The problem is clearly M-i-c-r-o-s-o-f-t

The answer is L-i-n-u-x

I agree :stuck_out_tongue:

I think I know what the issue is now, after more research (it really doesn't make sense to me), but I can code around it.

The query itself runs and executes fine. The issue is that I loop through the recordset and do some processing based on the values in each row (that can take a while, since I'm calling out to other servers for information). During that time, the MySQL thread is stuck in the "Writing to net" status. Eventually it finishes and everything goes back to normal.

I don't have an explanation for why slowly looping through a query's results causes the MySQL process to consume large amounts of CPU.

But what I plan to do is use a disconnected recordset, or just load the information into an array, and loop through one of those instead. That should stop MySQL from consuming CPU.
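
Roughly what I have in mind; this is a sketch only, with mysqli, placeholder credentials, and a made-up process_domain() helper standing in for the real per-row work:

[php]<?php
// Read the whole result set into PHP first, free the MySQL side, and only then
// do the slow per-row work, so the server isn't left sitting in "Writing to net"
// while we wait on other servers.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');   // placeholder credentials

$result = $db->query($sql);                   // $sql = the monitoring query above
$rows   = $result->fetch_all(MYSQLI_ASSOC);   // needs mysqlnd; otherwise loop fetch_assoc()
$result->free();
$db->close();                                 // MySQL is completely done at this point

foreach ($rows as $row) {
    process_domain($row);                     // slow part: whois lookups, remote calls, etc.
}
[/php]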

Might look into threading as well: http://php.net/manual/en/function.pcntl-fork.php. It's not on the same level as .NET, but it might help.
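
Rough shape of what that could look like (pcntl is CLI-only and not available on Windows; $rows and process_domain() are the same stand-ins as in the sketch above):

[php]<?php
// Bare-bones pcntl_fork() sketch: one child process per slow check.
// Real code would cap how many children run at once.
foreach ($rows as $row) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    }
    if ($pid === 0) {            // child
        process_domain($row);    // the slow remote call happens here
        exit(0);
    }
    // parent keeps looping and forking
}

while (pcntl_wait($status) > 0) {
    // reap the children so none are left as zombies
}
[/php]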
