As of 5:00 PM PST everything has been completely idle across the
servers. All delayed items have been processed. We'll be keeping a
close eye on the servers in the morning when traffic picks back up,
but right now we don't expect the kind of slowness we saw.
We've been noticing some of the slowness as well. Are you guys
running some sort of backup or analytics job around our European
morning? :) We're in GMT, and around 10 it's been very slow for the
past few days.
Yesterday was unrelated. But yes, our backup on Amazon EC2 runs at
that time and we're seeing elevated response times. Tonight EC2 was
having issues for approximately 22 minutes during that backup
window, during which the servers responded slowly.
We're working on putting measures in place to prevent the backup
from causing the site to run slow.
I started my morning by viewing unqueued discussions in the In
Box. My process: click on a discussion in the unqueued list, assign
it to a queue, and click the Admin link to return to the dashboard.
The first time I did this, it took maybe 30 seconds for the
dashboard to load after viewing a specific discussion. The second
time: 500 error.
It occurred to me that I've only recently started using the
category/queue filtering extensively, and thought perhaps that may
have something to do with the performance and errors. Sure enough:
when I filter by category=In Box, queue=unqueued, the dashboard
loading problems are persistent. When my filters are
category=In Box, queue=All, the problem doesn't happen.
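For what it's worth, one common way a "queue=unqueued" filter goes slow is when "unqueued" is implemented as a NOT IN subquery against the assignments table, while a concrete queue filter hits an index directly. This is purely a hypothetical sketch (the table and column names here are made up, not your actual schema) showing that pattern and an equivalent anti-join that is usually index-friendly:

```python
import sqlite3

# Hypothetical schema: discussions, plus a table mapping them to queues.
# All names here are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE discussions (id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE queue_assignments (discussion_id INTEGER, queue_id INTEGER);
    CREATE INDEX idx_qa_discussion ON queue_assignments (discussion_id);
""")
conn.executemany("INSERT INTO discussions VALUES (?, ?)",
                 [(i, "In Box") for i in range(1, 1001)])
# Assign every other discussion to a queue; the rest are "unqueued".
conn.executemany("INSERT INTO queue_assignments VALUES (?, 1)",
                 [(i,) for i in range(1, 1001, 2)])

# Naive "unqueued" filter: NOT IN with a subquery. On large tables some
# planners handle this badly, re-evaluating the subquery per row.
slow = conn.execute("""
    SELECT id FROM discussions
    WHERE category = 'In Box'
      AND id NOT IN (SELECT discussion_id FROM queue_assignments)
    ORDER BY id
""").fetchall()

# Equivalent anti-join, which can use idx_qa_discussion:
fast = conn.execute("""
    SELECT d.id FROM discussions d
    LEFT JOIN queue_assignments qa ON qa.discussion_id = d.id
    WHERE d.category = 'In Box' AND qa.discussion_id IS NULL
    ORDER BY d.id
""").fetchall()

assert slow == fast  # both find the same 500 unqueued discussions
print(len(slow))
```

Whether that's actually what's happening on your end, I obviously can't say; it's just the kind of query shape that would explain why "unqueued" misbehaves while "All" is fine.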
So it appears that I had two things going on yesterday: your
server slowdowns, plus what appears to be a performance issue
related to filtering by queue, which I suspect may be related to: