Capturing any data or making any measurement is an approximation, because every type of sensor has a limited degree of accuracy, with some more sensitive than others.
I think there is a clear enough line, however, between making an approximate record of a value and making a guess at a value, the latter being essentially how these “AI” camera systems work.
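To make the distinction concrete, here is a rough Python sketch (the numbers, the noise level, and the neighbour-averaging step are all made up for illustration): an approximated record starts from an actual measurement of the real value, noise and all, while a guess is synthesized from other data without measuring that point at all.

```python
# Rough illustration only: an approximated record vs. a guessed value.
import random

true_brightness = 0.4273  # the real scene value the sensor is pointed at

# Approximated record: measure the real value, with limited precision and noise.
measured = round(true_brightness + random.gauss(0, 0.01), 2)

# Guess: this point was never measured; it is inferred from neighbouring
# pixels, the way an AI in-painting or "zoom enhance" step would do it.
neighbours = [0.10, 0.95, 0.12]              # surrounding measured pixels
guessed = sum(neighbours) / len(neighbours)  # plausible, but not a record

print(measured, guessed)  # the first tracks reality; the second may not
```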
I disagree. It’s not that easy to draw a line.
First, current cameras that we consider to not use AI still manipulate images beyond just attempting to approximate the scene. They may not allow easy face swapping but they still don’t faithfully represent the scene much of the time.
Also, I don’t even think it is clear where we can draw a line between “normal” algorithms and “AI” algorithms. What level of machine learning is required before we consider an algorithm to be AI?
Simple non-AI algorithms and generative AI are on a spectrum of complexity. They aren’t discrete from one another such that they can be easily categorized.
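To illustrate what I mean by a spectrum, here is a rough Python sketch (numpy only; the kernels and image are placeholders I made up): a hand-tuned sharpening filter and a “learned” filter are mechanically the same operation, a convolution over the pixels. The only real difference is where the weights came from, which is exactly why the boundary is fuzzy.

```python
# Rough illustration only: a classic filter and a "learned" filter are the
# same mechanism; the weights just come from different places.
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution with edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# Hand-designed "non-AI" sharpening kernel (unsharp-mask style).
hand_made = np.array([[ 0, -1,  0],
                      [-1,  5, -1],
                      [ 0, -1,  0]], dtype=float)

# A "learned" kernel: random numbers standing in for weights that would
# normally come from training on example photos.
rng = np.random.default_rng(0)
learned = rng.normal(0.0, 0.1, size=(3, 3))

image = rng.random((8, 8))          # placeholder 8x8 grayscale image
sharpened = convolve2d(image, hand_made)
enhanced = convolve2d(image, learned)
# Same mechanism either way; "AI" mostly describes how the weights were chosen.
```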
I think that’s disingenuous…
There’s a clear difference between a processing mistake and an intentional addition. That’s a fairly clear line. Grain on a photo is not the same as making you look like a human head on a shark’s body.
Yes, no photo is 100% accurate, just as no statement will ever capture an incident perfectly. That doesn’t mean there’s no such thing as lying.
There is definitely a line.
Is the tech trying to make the photo more accurate, or change the narrative?
Sure, there’s some tech that’s awkwardly in the middle, like skin smoothing, but we don’t NEED that, and it’s not directly changing the narrative of the story, unless you’re selling acne medication.
I still disagree that there is a clear line. Yes, it is obvious that photo grain is different from making you look like a human head on a shark’s body. The problem is somewhere in the middle. Determining where that line is drawn is going to be difficult and is only going to become more difficult as this technology advances.
I think the line (while the details may certainly be difficult) is along “are you making the existing image/story clearer, or are you changing the narrative of the media?”
When the story you get from the image changes, then you’ve crossed the line.
I generally agree with you, but that is still a fuzzy line to draw, and it’s likely very dependent on circumstances. The devil is in the details.
I can concede to that… there will be some grey area, but the idea that “there is no true photo” or “there is no truth” feels wrong.