steveren
Newbie
Posts: 24
« on: April 30, 2014, 01:37:47 AM »
I was running 2014.137 until recently, when I upgraded to .210 (and subsequently .213). Everything is much slower with these newer versions.
The reason seems to be excessive disk I/O. I note that simply placing a roughly 2MB image on a page causes hundreds of MB of disk reads! I've seen the same thing on two different Windows installations (Win7 32 and 64-bit) and also in Linux under Wine. Even deleting an image from the page, which shouldn't involve reading any data, causes vast amounts of activity.
I'm reverting to .137 for now!
admin
« Reply #1 on: April 30, 2014, 02:03:09 AM »
Putting the image on the page isn't what's causing the disk I/O; you've got something else going on. Either it is building thumbnails or the raw cache at the same time, or something else is causing the disk I/O.
Mike
steveren
« Reply #2 on: April 30, 2014, 02:29:03 AM »
Sorry, that's not the case. I've watched what happens with both Windows and Linux monitoring tools and it's perfectly clear that the disk I/O is in direct response to my actions. The exact same actions in .137 are fine. It is consistent and repeatable across three very different environments: real PC running Win7/32; VM running Win7/64; PC running Linux+Wine. If the images are on a remote drive, I see huge network transfers, not local disk.
Whatever is happening, it's absolutely real.
admin
« Reply #3 on: April 30, 2014, 02:39:00 PM »
210 and 211 have exactly the same amount of disk reads as 137, as checked using Windows Process Explorer. Starting with 212 (and 213), there will be more disk reads because of the extended EXIF info. I can probably optimize that to use less disk I/O, but you shouldn't have to go back prior to 211 to get the same speed and disk I/O as 137.
I assume you are only worried about this with regard to network disk I/O, as you'd never notice the difference on a local hard drive.
Mike
steveren
« Reply #4 on: April 30, 2014, 04:20:53 PM »
Perhaps I'm not making myself clear. This isn't a minor difference due to reading EXIF data for new functionality; this is a massive difference which can only be due to a program bug. Obviously not one that shows up in all cases, but with the job I'm trying to do, it's absolutely consistent. It's sluggish to the point of near-unusability even from a local disk, and with a GbE network connection to an SSD, or a virtual network connection that's actually local physically, performance isn't much different (good or bad). I'm talking several seconds for almost any operation, no matter how simple, such as just removing an image from the queue! So if I:
- Run the .137 installer; start QIU; load my job; start working on it; everything is great.
- Run the .21x installer, overwriting the .137 installation; it's treacle.
- Re-run the .137 installer; back to normal.
admin
« Reply #5 on: April 30, 2014, 05:53:12 PM »
Please give 2014.214 a try. I've optimized the disk I/O, and it now performs even less disk I/O than the older 137. If you still have a problem with 2014.214, then we'll need to look elsewhere: memory pressure causing swap-file activity, anti-virus interference, or the like. If you still see the problem, please provide the type of machine, the OS you are running, and the amount of available memory: you can see that by clicking "Help" in Qimage Ultimate and then holding the shift key while you click "Analyze Current Settings".
Regards, Mike
steveren
« Reply #6 on: April 30, 2014, 11:16:09 PM »
Will do, but seriously, this is not swapfile thrashing or anything like that. I'm a software developer myself and I know a program doing something grossly wrong when I see it :-)
When I see hundreds of megabytes - this is not an exaggeration - being read simply to open a 2MB image with no other stress factors on the machine, that is not correct behaviour!
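For anyone wanting to reproduce this kind of measurement without Process Explorer: on Linux, the kernel exposes per-process I/O counters in /proc/self/io. A minimal stdlib sketch (Linux-only; `rchar` counts bytes requested through read() syscalls, which is the application-level figure at issue here, not bytes that physically hit the platter):

```python
import os
import tempfile

def read_syscall_bytes() -> int:
    """Cumulative bytes this process has requested via read()-style syscalls (Linux only)."""
    with open("/proc/self/io") as f:
        for line in f:
            if line.startswith("rchar:"):
                return int(line.split()[1])
    raise RuntimeError("rchar not found in /proc/self/io")

# Stand-in for "place one 2MB image on the page": read a single 2MB file once.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\x00" * (2 * 1024 * 1024))
    path = tmp.name

before = read_syscall_bytes()
with open(path, "rb") as f:
    f.read()
delta = read_syscall_bytes() - before
os.remove(path)

print(f"read() syscalls requested ~{delta / 1e6:.1f} MB for a 2 MB file")
# A well-behaved reader requests roughly the file size; an application that
# requests hundreds of MB here is re-reading data instead of caching it.
```

On Windows, Process Explorer's I/O columns report the analogous per-process totals.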
steveren
« Reply #7 on: May 01, 2014, 12:26:17 AM »
Ah, 214 is better, but still not good. It's a kind of treacle vs molasses comparison :-( However, I noticed it was pretty snappy before I loaded my job, so it looks as though it might be spuriously re-reading every file in the job queue on every operation!

Anyway, here's a link to a screen capture video showing exactly what I'm talking about. I do the exact same things in 137, 213 and 214, and you can see that 137 is instant, 213 is a slug on valium, and 214 is the slug on valium after it's had a cup of coffee :-)

http://stvren.webspace.virginmedia.com/QImage%20problem%20demo.avi
steveren
« Reply #8 on: May 01, 2014, 01:15:16 AM »
And here are some Sysinternals Process Monitor logs of the drag-to-page operation. It's interesting that the 213 log is about 54MB and the 214 log just over 60MB, despite 213 being noticeably slower. OTOH, the 137 log is 4 - four! - MB. That's an awful lot of I/O that wasn't happening before.

Some of the issues are obvious, such as file I/O on every mouse-move event when the pointer is over a thumbnail, but that'll mostly be cached by the OS and fast enough not to be a problem on a modern system. Other stuff I don't understand at all, such as the many attempts to access the offline files cache (c:\windows\csc) when I've got offline files disabled.

http://stvren.webspace.virginmedia.com/ProcmonLogs.zip
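As an aside, logs like these can be summarized mechanically instead of eyeballed. A sketch that tallies read operations per file from a Process Monitor CSV export (it assumes the default export column names such as "Operation" and "Path"; the sample rows below are made up for illustration):

```python
import csv
import io
from collections import Counter

def summarize_reads(csv_text):
    """Count ReadFile operations per path in a Procmon CSV export."""
    ops = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get("Operation") == "ReadFile":
            ops[row["Path"]] += 1
    return ops

# Hypothetical sample in the shape of a Procmon CSV export.
sample = '''"Time of Day","Process Name","Operation","Path"
"01:00","qimage.exe","ReadFile","Y:\\job\\img1.jpg"
"01:00","qimage.exe","ReadFile","Y:\\job\\img1.jpg"
"01:00","qimage.exe","WriteFile","Y:\\job\\undo.dat"
'''
print(summarize_reads(sample).most_common(3))
```

Sorting the counter's output makes the hot files (and any queue-wide re-read pattern) stand out immediately.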
admin
« Reply #9 on: May 01, 2014, 02:49:47 AM »
Quote from: steveren on April 30, 2014, 11:16:09 PM
> Will do, but seriously, this is not swapfile thrashing or anything like that. I'm a software developer myself and I know a program doing something grossly wrong when I see it :-)
> When I see hundreds of megabytes - this is not an exaggeration - being read simply to open a 2MB image with no other stress factors on the machine, that is not correct behaviour!

It's only "incorrect behavior" if it causes an issue with operation, and the issue on yours seems quite exaggerated. Trawling every byte that goes in and out of the hard drive may be helpful information, but it is not the cause of the speed problem. It may be a clue, but it isn't the problem. Even a slow system can read 25 MB of data in a fraction of a second. Even with your job of over 100 prints, 214 is only reading a couple dozen megabytes (not several hundred) when you add a new image, mostly reading only thumbnails to determine changes... and all of those should be cached in the disk cache.

So I ask again: what OS are you running, and on what type of machine? I'll try to further optimize things, but on my system it isn't taking nearly as long to do those operations, so at this point I just want to figure out whether I'm putting wings on a dinosaur (a Pentium 4 running XP) or if yours is a reasonably modern system: that would help me determine the severity of the problem and where to look in the code. It looks like Win7 at least, with 4 CPUs, but I notice you are using the Y drive. I have no idea what type of drive that is: if it's a USB 2.0 drive or something slow, I may have to take that into account too in what I'm doing.

BTW, every version since the beginning has disk I/O as you run your mouse over the thumbnails: it is reading the header of those images to display EXIF information in the hotbar at the bottom.

Mike
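A header-only EXIF probe can stay cheap because JPEG is a sequence of length-prefixed marker segments: a reader can hop from header to header and stop at the Exif APP1 block without ever touching the compressed image data. A sketch of that walk (not Qimage's actual reader; it ignores corner cases like marker padding) run against a synthetic in-memory JPEG:

```python
import io
import struct

def exif_segment_span(f):
    """Locate the Exif APP1 payload in a JPEG stream, reading only a few
    bytes per segment header and seeking past everything else.
    Returns (offset, length) or None if there is no Exif segment."""
    if f.read(2) != b"\xff\xd8":               # SOI marker
        raise ValueError("not a JPEG")
    while True:
        marker = f.read(2)
        if len(marker) < 2 or marker[0] != 0xFF or marker[1] == 0xDA:
            return None                        # EOF or start-of-scan: give up
        (length,) = struct.unpack(">H", f.read(2))
        payload = length - 2                   # length field includes itself
        if marker[1] == 0xE1:                  # APP1 may hold Exif
            head = f.read(6)
            payload -= 6
            if head == b"Exif\x00\x00":
                return (f.tell(), payload)     # TIFF data starts here
        f.seek(payload, 1)                     # skip payload without reading it

# Synthetic one-segment JPEG so the sketch is self-contained.
tiff = b"II*\x00" + b"\x00" * 8                # minimal TIFF header stub
app1 = b"Exif\x00\x00" + tiff
jpeg_bytes = (b"\xff\xd8"
              + b"\xff\xe1" + struct.pack(">H", len(app1) + 2) + app1
              + b"\xff\xda")
print(exif_segment_span(io.BytesIO(jpeg_bytes)))
```

The seek in the last line is why a mouse-over probe touches kilobytes, not megabytes, per image.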
« Last Edit: May 01, 2014, 02:54:44 AM by admin »
admin
« Reply #10 on: May 01, 2014, 12:31:43 PM »
BTW, no argument that 200 is slower when you get into larger print jobs (over about 100); I just don't attribute it to disk I/O. I just did a job of about 250 prints and ran some diagnostics on where the code is spending most of its time by putting in timers. It turns out that disk I/O contributed very little; most of the time was being spent decoding the JPEG thumbnails. When I replaced the JPEG thumbnails with uncompressed BMP images (still read from the disk), things were nearly instantaneous, even though that used even more disk reads for the larger BMP files. Obviously that's not going to be my solution, but I'll look later to see what I can do about using another method to check page formatting rather than using the thumbnails to discover size and cropping.
Mike
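That result is easy to believe: decode cost can dwarf I/O even when the raw file is larger on disk. A stand-in sketch using zlib in place of JPEG decompression (illustrative only, not Qimage's code or format):

```python
import time
import zlib

# A synthetic "thumbnail": regular, highly compressible pixel-like data.
raw = bytes(range(256)) * 4096             # ~1 MB uncompressed ("BMP" stand-in)
compressed = zlib.compress(raw, level=9)   # much smaller on disk ("JPEG" stand-in)

t0 = time.perf_counter()
for _ in range(50):
    decoded = zlib.decompress(compressed)  # pay the decode cost every time
t_decode = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(50):
    copied = bytes(raw)                    # uncompressed: just copy the bytes
t_copy = time.perf_counter() - t0

print(f"compressed size: {len(compressed)}, raw size: {len(raw)}")
print(f"decode x50: {t_decode:.3f}s   plain copy x50: {t_copy:.3f}s")
```

The smaller file costs more per access because every access repeats the decompression, which is why caching the decoded form (or not consulting the thumbnails at all) is the real win.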
admin
« Reply #11 on: May 01, 2014, 08:43:27 PM »
v2014.215 corrects the slow-down with larger jobs (many prints).
Mike
steveren
« Reply #12 on: May 01, 2014, 09:21:18 PM »
Ok, I've analysed the procmon logs and I now know more about what's going on, if not why.
This is for a simple drag'n'drop from the folder thumbnail listing to a new blank page at the end of an existing multi-page job. All files are on the local drive and I'm only looking at file read events.
137: the file being placed on the page is read. The process is horribly inefficient, though, as it involves over 2,200 separate read operations, almost all of which are for only a single byte, and many bytes are re-read twenty-five times! Yes, of course, most of those will come from the OS cache, but it's still a hell of an overhead to call into the kernel like that rather than maintaining a single in-memory copy.
214: starts much like 137, although the exact pattern of reads is slightly different and actually comes to about 2,400 operations. However, it then goes on to issue 179 reads for every file in the job queue, again mostly reading one byte at a time. With 64 images in the queue, this adds up to over 11,500 ops! Remember that I'm appending to the end of the queue on a blank page: nothing whatsoever has changed that needs re-reading.
But it gets worse. I edit the page and change the picture location, just typing a number into one of the position boxes.
137: It just moves the picture.
214: Over 23,000 read ops! Again the whole queue, mostly one byte at a time.
Sorry, Mike, but that's a bug in my book. I haven't even looked at other I/O ops than just reading my data files, but there's clearly something wrong.
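Whether or not that byte-at-a-time pattern is what Qimage is actually doing internally, the cost difference it implies is easy to demonstrate: unbuffered parsing pays one kernel round-trip per byte, while a user-space buffer collapses the same reads into a handful of calls. A stdlib sketch consuming a 4 KB file one byte at a time both ways, counting the underlying read calls:

```python
import io
import os
import tempfile

class CountingFile(io.FileIO):
    """FileIO that counts how many underlying read calls it services."""
    calls = 0
    def readinto(self, b):
        self.calls += 1
        return super().readinto(b)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\xab" * 4096)
    path = tmp.name

# Unbuffered: one kernel round-trip per byte, 4096 reads for 4 KB.
fd = os.open(path, os.O_RDONLY)
unbuffered_ops = 0
while os.read(fd, 1):
    unbuffered_ops += 1
os.close(fd)

# Buffered: the same byte-at-a-time reads are served from an 8 KB buffer.
raw = CountingFile(path, "rb")
buf = io.BufferedReader(raw)
data = bytearray()
while True:
    b = buf.read(1)
    if not b:
        break
    data += b
buffered_ops = raw.calls
buf.close()
os.remove(path)

print(f"unbuffered read ops: {unbuffered_ops}, buffered: {buffered_ops}")
```

The buffered version issues two underlying reads instead of four thousand, which is the difference between a Procmon log measured in kilobytes and one measured in tens of megabytes.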
steveren
« Reply #13 on: May 01, 2014, 09:28:56 PM »
Forgot to add that I compared 213 as well, although that's water under the bridge now. It was basically similar to 214, except that it was also reading the entire image data for the queue, not just what I presume is the EXIF metadata. That explains the 'hundreds of MB' observation and why 213 was even slower than 214.
admin
« Reply #14 on: May 01, 2014, 09:32:19 PM »
First, you are making many assumptions about what needs to be done and what you think should be done. Your heuristics are also wrong. Nothing is reading one byte at a time, and certainly not the same byte over and over: your report may show that, but it is inaccurate. There are also times when QU saves undo information after certain operations.

Second, none of that matters in 215, as it is efficient and there are no unreasonable delays. Even with hundreds of images in the queue, performing an operation only results in a few MB being read from the HD. There are times when adding or moving an image can affect pages other than the one you are working on, and I'm certainly not going to rewrite the program to build special cases that optimize things like freehand or single-page editing when it'll only save maybe 4 MB of disk reads (reads that have already been cached into RAM) and a quarter of a second. So 215 is both efficient and manageable from a code standpoint.
Mike
« Last Edit: May 01, 2014, 09:46:39 PM by admin »