this post was submitted on 29 Dec 2025

Technology
top 35 comments
[–] JeeBaiChow@lemmy.world 83 points 3 weeks ago (3 children)

Good read. Funny how I always thought the sensor read RGB, instead of simple light levels behind a filter pattern.

[–] _NetNomad@fedia.io 32 points 3 weeks ago (1 children)

Wild how far technology has marched on and yet we're still essentially using the same basic idea behind Technicolor. But hey, if it works!

[–] GamingChairModel@lemmy.world 6 points 3 weeks ago

Even the human eye basically follows the same principle. We have three types of cones, each sensitive to a different range of wavelengths. Our visual cortex combines each cone cell's one-dimensional input (the intensity of light within that cell's sensitivity range), from both eyes, plus the information from the color-blind rods, into a seamless single image.
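A toy sketch of that principle (the curves and spectra here are made-up numbers, not real cone sensitivities): each receptor collapses an entire spectrum into a single intensity value, and color only exists once several such one-dimensional readings are compared.

```python
def cone_response(spectrum, sensitivity):
    """Overlap of an incoming spectrum with one receptor's sensitivity curve.

    Both arguments are equal-length lists of per-wavelength-bin values.
    The receptor reports just this one number, nothing about the shape
    of the spectrum itself.
    """
    return sum(s * w for s, w in zip(spectrum, sensitivity))

# Three hypothetical receptors, each peaking in a different bin range.
long_wave = [0.0, 0.1, 0.3, 0.8, 1.0]   # "red-ish" cone
mid_wave = [0.1, 0.5, 1.0, 0.5, 0.1]    # "green-ish" cone
short_wave = [1.0, 0.8, 0.3, 0.1, 0.0]  # "blue-ish" cone
```

Two physically different spectra that produce the same three responses would be indistinguishable, which is exactly the metamerism that lets three-channel cameras and displays work at all.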

[–] Davel23@fedia.io 20 points 3 weeks ago (3 children)

For a while the best/fanciest digital cameras had three CCDs, one for each RGB color channel. I'm not sure if that's still the case or if the color filter process is now good enough to replace it.

[–] CookieOfFortune@lemmy.world 8 points 3 weeks ago (2 children)

There are some sensors that have each color stacked vertically instead of using a Bayer filter. I don't think they're popular, partly because their low-light performance is worse.

[–] GreyEyedGhost@piefed.ca 4 points 3 weeks ago

This was sold by Foveon, and it had some interesting differences. The sensors were layered, which, among other things, meant that moiré patterns didn't occur on them.

[–] Natanael@infosec.pub 1 points 3 weeks ago

Some Sony phones have that type of sensor

[–] lefty7283@lemmy.world 7 points 3 weeks ago (1 children)

At least for astronomy, you just have one sensor (they’re all CMOS nowadays) and rotate out the RGB filters in front of it.

[–] trolololol@lemmy.world 4 points 3 weeks ago (1 children)

Is that the case for big ground and space telescopes too? I can imagine this could cause wobbling.

Btw, is that how infrared and X-ray telescopes work as well?

[–] lefty7283@lemmy.world 6 points 3 weeks ago

It sure is! The monochrome sensors are also great for narrowband imaging, where the filters let through one specific wavelength of light (like hydrogen alpha) which lets you do false color imaging.

IR is basically the same. Here’s the page on JWST’s filters. No clue about X-ray scopes, but IIRC they don’t use any kind of traditional CMOS or CCD sensor.

[–] worhui@lemmy.world 2 points 3 weeks ago

3-chip CMOS sensors are about 20-25 years out of date; mosaic-pattern sensors have eclipsed them on most imaging metrics.

[–] TheBlackLounge@lemmy.zip 13 points 3 weeks ago

You could see the little 2x2 blocks as a pixel and call it RGGB. It's done like this because our eyes are much more sensitive to the middle wavelengths; even our red and blue cones can detect some green. So those details are much more important.

A similar thing is done in JPEG: the green channel always carries the most information.
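For anyone who wants to see the layout concretely, here's a minimal sketch of the repeating RGGB tile (the function name is made up; real sensors can start the pattern on a different phase):

```python
def bayer_color(row: int, col: int) -> str:
    """Which color filter covers the photosite at (row, col)
    in an RGGB Bayer mosaic (tile phase starting at the origin)."""
    pattern = [["R", "G"],
               ["G", "B"]]  # one 2x2 tile, repeated across the whole sensor
    return pattern[row % 2][col % 2]
```

Counting over any even-sized region shows the 1:2:1 ratio: half of all photosites are green, a quarter red, a quarter blue.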

[–] tyler@programming.dev 49 points 3 weeks ago (1 children)

This is why I don’t say I need to edit my photos, but that I need to process them. "Editing" is clearly understood by the layperson as Photoshop, and while they don’t necessarily understand "processing", many people still remember taking photos to a store and getting them processed from film into prints they could give someone.

[–] Fmstrat@lemmy.world 3 points 3 weeks ago

As a former photographer back when digital was starting to become the default, I wish I had thought of this.

[–] GamingChairModel@lemmy.world 24 points 3 weeks ago (1 children)

This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when in reality translating the sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how those charges or voltages on a sensor should become pixels in a digital file.

[–] XeroxCool@lemmy.world 5 points 3 weeks ago

Same, especially because I'm a frequent sky-looker but have to warn any ride-along that all we're going to see by eye is pale fuzzy blobs, and all my camera is going to show them tonight is pale spindly clouds. I think it's neat as hell that I can use some $150 binoculars to find interstellar objects, but many people are bored by the lack of Hubble-quality sights on tap. Like... yes, that's why we sent a telescope to space to get those images.

That being said, I once had the opportunity to see the Orion nebula through a ~30" reflector at an observatory, and damn. I got to eyeball about what my camera can do in a single frame with perfect tracking and settings.

[–] nycki@lemmy.world 23 points 3 weeks ago

Good post! Always nice to see actual technology on this sub.

[–] ryrybang@lemmy.world 15 points 3 weeks ago (2 children)

How do you get a sensor data image from a camera?

[–] ada@piefed.blahaj.zone 20 points 3 weeks ago

RAW files. Even then, you mostly see the processed result based on whatever processing your raw image viewer/editor does. But if you know how to get to it and use it, the same raw sensor capture data is there

[–] forks@lemmy.world 17 points 3 weeks ago (1 children)

Most cameras will let you export raw files, and a lot of phones do as well (although phone raws aren't great, since the phone usually does a lot of processing before giving you the normal picture).

[–] trolololol@lemmy.world 1 points 3 weeks ago (1 children)

My understanding is that really raw phone data also has a lot of lens distortion, and proprietary code written by the camera brand has specific algorithms to undo that effect. And this is the part that phone tinkerers complain is not open source (well, it does lots of other things to the camera too).

[–] brucethemoose@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

Modern mirrorless cameras do this too. For example, this is what my Canon kit lens looks like with/without digital barrel distortion correction:

img

img

Not my photos. From: https://dustinabbott.net/2024/05/canon-rf-s-18-45mm-f4-5-6-3-is-stm-review/

And https://dustinabbott.net/2024/04/canon-rf-24-50mm-f4-5-6-3-is-stm-review/

img

My own unprocessed RAWs are pretty wild. But (IMO) it’s a reasonable compromise to make lenses cheaper and better, outside of some ridiculous examples like the 24-50.
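A one-term sketch of the radial model those correction profiles are built on (the function name and coefficient value are made up for illustration; real lens profiles store several coefficients, and undoing the warp is typically iterative):

```python
def apply_radial_distortion(x: float, y: float, k1: float) -> tuple:
    """Forward one-term radial distortion, with (x, y) measured from
    the image center in normalized units.

    Negative k1 models barrel distortion: points far from the center
    get pulled inward, which is why uncorrected wide-angle raws bulge.
    """
    r2 = x * x + y * y          # squared distance from the optical center
    scale = 1.0 + k1 * r2       # radial scaling factor
    return x * scale, y * scale
```

Correction software effectively inverts this mapping (stretching the edges back out), which is also why corrected images lose a little resolution at the corners.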

[–] dusty_raven@discuss.online 12 points 3 weeks ago (1 children)

I'm a little confused on how the demosaicing step produced a green-tinted photo. I understand that there are 2x green pixels, but does the naive demosaic process just show the averaged sensor data which would intrinsically have "too much" green, or was there an error with the demosaicing?

[–] stealth_cookies@lemmy.ca 19 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Yes, given the comment about averaging with the neighbours, green will be overrepresented in the average. An additional (smaller) factor is that the colour filters aren't perfect, and the green one in particular often has significant sensitivity to wavelengths that the red and blue filters are meant to pick up.

edit: One other factor I forgot: green photosites are often more sensitive than the red and blue ones.
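A toy calculation makes the overrepresentation concrete (the function is hypothetical, just illustrating the arithmetic of a naive averaging demosaic):

```python
def naive_tile_average(r: float, g: float, b: float) -> float:
    """Unweighted mean of one 2x2 RGGB tile.

    Green appears twice in the tile, so it carries half the weight
    instead of the third it would get in a balanced per-channel mean.
    """
    tile = [r, g, g, b]           # photosite values in one RGGB tile
    return sum(tile) / len(tile)  # (r + 2g + b) / 4
```

For a pure-green patch (r=0, g=1, b=0) the naive average is 0.5, versus 1/3 for an equal-weight mean of the three channels — hence the green cast unless the demosaic handles each channel separately.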

[–] ryannathans@aussie.zone 7 points 3 weeks ago (1 children)

Plus the human eye is more sensitive to green than to the other channels.

[–] eleijeep@piefed.social 8 points 3 weeks ago

Green is not a creative colour.

[–] worhui@lemmy.world 11 points 3 weeks ago

Not sure how worth mentioning it is considering how good the overall write up is.

Even though the human visual system has a non-linear perception of luminance, the camera data is adjusted because the display has a non-linear response. The data is encoded so that the result appears linear at the display's face. So it's the display's non-linear response that is being corrected, not the human visual system.

There is a bunch more that can be done and described.
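A minimal sketch of that display-referred encoding, using the standard piecewise sRGB transfer curve (this is the generic sRGB formula, not any particular camera's tone curve):

```python
def srgb_encode(linear: float) -> float:
    """Encode a linear-light value in [0, 1] with the sRGB transfer curve.

    The display applies roughly the inverse of this curve, so the
    round trip is linear at the display's face. The encoding also
    spends more code values on shadows, which suits human vision.
    """
    if linear <= 0.0031308:
        return 12.92 * linear                      # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055     # gamma segment
```

For example, linear middle gray (~0.18) encodes to roughly 0.46, which is why raw sensor data viewed without this step looks so dark.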

[–] bookmeat@lemmynsfw.com 10 points 3 weeks ago (1 children)

I think the penultimate photo looks better than the final one that has the luminance and stuff balanced, but maybe that's just me.

[–] brucethemoose@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

It’s not just you.

Zooming in, I feel like the “camera jpeg” lost sharpness to recompression.

It’s kind of insane that cameras either dump raw data, or do all this magic only to throw so much away to an ancient image codec that loses even more when recompressed.

Newer ones can save a HEIF or a “lossy RAW” in some circumstances (which is an infinite improvement), but still; I eagerly await the day cameras can save a JPEG-XL all by themselves, and that I can post them on the Fediverse.

[–] Slashme@lemmy.world 5 points 3 weeks ago
[–] TVA@thebrainbin.org 4 points 3 weeks ago (1 children)

Is this you? If so, my wife wonders what camera and software you used!

[–] trolololol@lemmy.world 2 points 3 weeks ago

This info may still be present in the files; download them and inspect them with any software that displays that kind of info. I'm not proficient in it, I'm just a nerd who did it a decade ago when I was into photography.

[–] confuser@lemmy.zip 4 points 3 weeks ago
[–] avidamoeba@lemmy.ca 1 points 3 weeks ago

That's crazy.

[–] PalmTreeIsBestTree@lemmy.world -2 points 3 weeks ago

Fake deer head