Mike Chaney's Tech Corner
Author Topic: Compressed-sensing algorithm  (Read 14748 times)
Ernst Dinkla
« on: March 02, 2010, 02:45:11 PM »

Another upsampling algorithm for Qimage?

http://www.wired.com/magazine/2010/02/ff_algorithm/all/1



Kind regards, Ernst Dinkla

Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/
admin
« Reply #1 on: March 02, 2010, 02:59:58 PM »

Thanks.  I'll take a look.  Looks a little like "vaporware", especially since the example (Obama) is certainly fabricated.  The problem with algorithms that claim to get that kind of result is that there simply is not enough data in the original to derive the result.  Hence the fabrication of the result in their example.

Mike
rayw
« Reply #2 on: March 02, 2010, 03:32:57 PM »

Hi Ernst,

Interesting article, in particular the comments. Ignoring the 'Obama' example images, which do not illustrate the process, I think the general principle would give similar results to existing processes for the sort of photography we are involved in, but it may work well for particular types of images and for the way in which that image data is collected. As I mentioned, the comments are more interesting (and realistic). A horse is a four-legged animal, but not all horses have four legs (or is it 'not all four-legged animals are horses'?)

Best wishes,

Ray
admin
« Reply #3 on: March 02, 2010, 05:18:59 PM »

Quote from: rayw — "I think the general principle would give similar results to existing processes for the sort of photography we are involved in, but may work well for particular types of images and the way in which that image data is collected."

That was my initial reaction.  I believe we've gone about as far as we can go with photographic interpolation algorithms.  Some are a little sharper than the "hybrid" method in Qimage, but generally when you try to shoot for very sharp edges in a "random" photograph like some of the sharper algorithms do, you end up with a photo that looks a little more like a charcoal drawing than a photo.  The reason is that you've broken the relationship between sharpness and image size because there simply isn't enough data in the original to determine whether or not those edges should really be that sharp.  It's a balance, really, but you can't get something for (from) nothing.  The super-sharp edge detection algorithms do work well for non-photographic images like computer graphics (pie charts, screen shots, geometric 3D renderings, etc.).

I also have to say that there's less need for the super-refined interpolation algorithms these days than in the past.  Most people have cameras that have enough pixels to do the job.  The emphasis now is on proper sizing for printing so that you get the most out of the pixels you do have, regardless of how many.  Of course, that's what the smart sharpening in Qimage is designed to do.

Mike
Ernst Dinkla
« Reply #4 on: March 02, 2010, 09:07:26 PM »

Quote from: admin — "That was my initial reaction. I believe we've gone about as far as we can go with photographic interpolation algorithms. [...] Of course, that's what the smart sharpening in Qimage is designed to do."

True, the megapixels available did increase over time, but whether that is enough depends on the print size. I cannot judge what this algorithm does; I just wanted to point out that it exists. I mentioned other interpolation routines before because Qimage is also used for jobs with text and (rasterised) vector formats, reproduction of geometric art, and similar content, and the hybrid and related interpolations are then not the most suitable ones. That Qimage allows a choice of interpolation is nice; if that can be extended with sensible choices for such print jobs, why not?


Kind regards, Ernst Dinkla
rayw
« Reply #5 on: March 02, 2010, 11:07:41 PM »

Hi Ernst,

The link you gave is interesting, but does not give much detail. However, it appears it is designed to work more with, if you like, randomly selected information, making an 'educated' guess at filling in the missing pieces for the rest of it. A bit like weather forecasting in the UK. Referring to their link on sparsity, it states:
Quote
Compressive sensing is a new field which has seen enormous interest and growth. Quite surprisingly, it predicts that sparse high-dimensional signals can be recovered efficiently from what was previously considered highly incomplete measurements.
I have the feeling it is useful where much less data is available, and that data is not spread evenly over the image area, compared to the sort of high-density information available in our current digital images. It might be useful if the image were somehow damaged, say with a great number of pixels missing, on specific types of image, depending on where the damaged area was. But that is what the 'clone stamp tool' and others are used for. The example they showed was sort of reverse-engineered in Photoshop, afaik, and I do not think the compressed-sensing technique would actually work too well on that type of noise. There already exist good techniques for reducing noise in images. I tend to think the algorithm could end up enhancing the noise instead of the underlying image, but, as one of the comments mentioned, it may work well in combination with other techniques.
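For what it's worth, the recovery idea behind that quote — reconstructing a sparse signal from far fewer measurements than samples — can be sketched in a few lines. This is not the algorithm from the Wired article, just a minimal basis-pursuit illustration using numpy and scipy; the problem sizes here are arbitrary.

```python
# Basis-pursuit sketch of compressed sensing: recover a sparse signal x
# from far fewer random measurements y = A @ x than x has entries,
# by minimising the l1 norm subject to the measurement constraint.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m, k = 80, 40, 5          # signal length, measurements, non-zeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n))  # random Gaussian sensing matrix
y = A @ x_true               # the "highly incomplete" measurements

# min ||x||_1  s.t.  A x = y, written as a linear program with
# x = u - v, u >= 0, v >= 0, objective sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.abs(x_rec - x_true).max())
```

With only 5 non-zeros in an 80-sample signal, 40 random measurements typically give exact recovery to solver precision; with a dense signal (like the pixel data of one of our photographs) it fails, which is the point made above about where this does and does not apply.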

I think the problems which we face with upsizing our images are different. However, for text, at first thought I imagined it managing anti-aliasing, but thinking on it further, it would need limiting somehow, since I could see the whole result being blurred, and there is no chance it could fill in missing letters.  

Many folk seem to be satisfied with repetitive bicubic upsizing. In effect, I suppose that makes its own attempt at filling in missing data, possibly better than doing it with one hit.
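The 'repetitive upsizing' idea is easy to try: resize by a factor of 2 repeatedly instead of jumping straight to the target size. A minimal sketch with numpy, using plain bilinear interpolation for brevity (Qimage's actual resamplers are more sophisticated, and the function names here are my own):

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Separable bilinear resize of a 2-D greyscale array."""
    in_h, in_w = img.shape
    # sample positions in the source grid for each output pixel
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    # interpolate along rows first, then along columns
    tmp = np.empty((in_h, out_w))
    for i in range(in_h):
        tmp[i] = np.interp(xs, np.arange(in_w), img[i])
    out = np.empty((out_h, out_w))
    for j in range(out_w):
        out[:, j] = np.interp(ys, np.arange(in_h), tmp[:, j])
    return out

def upsize_stepwise(img, factor):
    """Double the size repeatedly, then finish with one final resize."""
    h, w = img.shape
    while img.shape[0] * 2 <= h * factor:
        img = resize_bilinear(img, img.shape[0] * 2, img.shape[1] * 2)
    return resize_bilinear(img, h * factor, w * factor)

img = np.arange(16, dtype=float).reshape(4, 4)
one_hit = resize_bilinear(img, 16, 16)   # single 4x jump
stepped = upsize_stepwise(img, 4)        # repeated 2x doubling
print(one_hit.shape, stepped.shape)      # both (16, 16)
```

Each doubling re-smooths the result of the previous one, which is why repeated small steps can look different from a single large jump, even though no new information is added either way.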

Mind you, I am quite used to being wrong.

Best wishes,

Ray

A bit more. In the original efforts (the scan), they knew the sort of image they were going to get. The test image (the phantom) was known, too. Both images are vaguely similar. The detail they are recovering is not that high, I think, compared to the detail in one of our landscapes. We will see where it goes.
« Last Edit: March 02, 2010, 11:16:07 PM by rayw »
Ernst Dinkla
« Reply #6 on: March 03, 2010, 08:51:29 AM »

More than a year ago I mentioned this kind of scaling routine:
http://en.wikipedia.org/wiki/Pixel_art_scaling_algorithms

I think it isn't a bad idea to put possible additions to Qimage on the table; in the end it is Mike who decides, and he is critical enough to sieve out the silly things. I am not asking that routines like these be implemented; it is just to bring them to his attention.

There is a list of wishes, however. The better metric feedback request, as put forward again some days ago, is at the top of my list of unfulfilled wishes. The other one is moving the coloring of the mark corners from the border color option to a place among the general preferences. I asked for the coloring choices and got them, but the way it is implemented is tricky in practice: the next job printed may get a light grey border if you are not paying attention to that setting. In the preferences, that mark-corners color setting could be set once and kept.


Kind regards, Ernst Dinkla
admin
« Reply #7 on: March 03, 2010, 01:07:33 PM »

Quote from: Ernst Dinkla — "More than a year ago I mentioned these kind of scaling routines: http://en.wikipedia.org/wiki/Pixel_art_scaling_algorithms"

I looked into those too... a year ago.  They are awful for photographs!  They only work for pixel art (games and graphics with predictable line boundaries).  In photos they produce pronounced artifacts because they try to make connections that don't exist in the more random world of photographs versus the predictable world of computer graphics.
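The 'connections' those scalers make can be seen in a tiny pure-Python sketch of the EPX/Scale2x rule (one of the algorithms on that Wikipedia page): each pixel becomes a 2x2 block, and a corner copies a neighbour only when two adjacent neighbours match. That rule assumes hard pixel-art edges, which is exactly the kind of guess that misfires on photographic noise.

```python
def scale2x(img):
    """EPX/Scale2x: upscale a 2-D grid of values by exactly 2x.
    Each corner of a 2x2 output block copies a neighbour only when
    two adjacent neighbours are equal -- a hard-edge assumption."""
    h, w = len(img), len(img[0])

    def px(y, x):  # clamp coordinates at the borders
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a, b = px(y - 1, x), px(y, x + 1)   # up, right
            c, d = px(y, x - 1), px(y + 1, x)   # left, down
            tl = a if c == a and c != d and a != b else p
            tr = b if a == b and a != c and b != d else p
            bl = c if d == c and d != b and c != a else p
            br = d if b == d and b != a and d != c else p
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = tl, tr
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = bl, br
    return out

out2x = scale2x([[1, 1], [1, 1]])
print(out2x)  # a uniform area is left untouched: four rows of [1, 1, 1, 1]
```

Note the fixed 2x factor: these algorithms only work at certain integer ratios, which is another reason they are a poor fit for arbitrary print sizing.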

Mike
Ernst Dinkla
« Reply #8 on: March 03, 2010, 02:23:57 PM »

Quote from: admin — "I looked into those too... a year ago. They are awful for photographs! They only work for pixel art [...]"

I know; the message that went with it mentioned another use. The other drawback was that their use was limited to certain upsampling ratios.


Kind regards, Ernst Dinkla
Seth
« Reply #9 on: March 31, 2010, 11:34:13 AM »

I just wonder if it isn't a basis for another de-noising algorithm, especially when not used as intensely as in their sample, which is way beyond anything I'd attempt to save.

Seth
<CWO4 (FMF) USN, Ret.>