Transforming Video Game Cameras into Eyes

OK, to start: what's on every 4K TV screen at a store? Some super-detailed nature footage. In real life you'd never take in all that detail at once; your eyes fixate on movement and constantly jump around, refocusing as they go. So the clarity looks cool in real life, but maybe not so much in video games. ENB-style mods commonly add a depth-of-field effect to simulate your eyes. It looks nice, but it can often break immersion. For example, I was playing GTA5 on PC the other day. There I was, trying to shoot at a person behind a bus. The only problem was I couldn't see him! The game kept focusing on the bus, then the buildings, then back to the bus! So let's fix this, shall we? (Or at least have some fun with it.)

1: Getting the data
1.A: IRL Data
So my original thought was some crazy camera scheme to measure the eye and how it's focusing. Well, that's dumb, because you're staring at a flat screen; there's no reason for your eye's focus to change. Therefore the depth measurement will need to happen in software. My better idea is to use eye-tracking technology to point to where you're looking.
1.B: Virtual Data
Now that we know where the user is looking, we can figure out the distance to that object/location. IDK how to go about this, though; I'm only an amateur programmer.
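One plausible approach (just a sketch, not tied to any particular engine): read the depth buffer back and sample it at the gaze point. The linearization below assumes the default OpenGL depth range and a standard perspective projection; `focus_distance`, its arguments, and the flat-list depth buffer are illustrative names, not a real API.

```python
def linearize_depth(d, near, far):
    # Invert the default OpenGL projection's depth mapping:
    # d in [0, 1] (what glReadPixels hands back) -> eye-space distance.
    ndc_z = 2.0 * d - 1.0  # back to [-1, 1] NDC
    return (2.0 * near * far) / (far + near - ndc_z * (far - near))

def focus_distance(gaze_x, gaze_y, depth_buffer, width, height, near, far):
    # Hypothetical: gaze coords are normalized to [0, 1]; depth_buffer is a
    # flat row-major list of [0, 1] depth samples, as from a readback.
    px = min(int(gaze_x * width), width - 1)
    py = min(int(gaze_y * height), height - 1)
    return linearize_depth(depth_buffer[py * width + px], near, far)
```

With near = 0.1 and far = 100, a depth sample of 0.0 comes back as 0.1 (the near plane) and 1.0 comes back as 100 (the far plane), so the gaze point maps directly to a usable focus distance.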

2: Using the data
Now, this needs a lot of calibration for it not to be jarring. First, set whatever the eye is on as the main focus. Second, measure the distance from that object to other objects. The rate at which things blur with distance would need to be adjusted and tested.
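As a starting point for how fast things should blur with distance, a thin-lens circle-of-confusion model gives a physically motivated blur size. This is only a sketch: the aperture and focal-length numbers are placeholder values to tune, and the clamp is an arbitrary cap, not a calibrated constant.

```python
def blur_amount(depth, focus_depth, aperture=0.05, focal_len=0.05):
    # Thin-lens circle-of-confusion approximation (distances in meters).
    # depth is assumed > 0 (it comes from a depth-buffer readback, so it
    # is at least the near-plane distance).
    coc = aperture * focal_len * abs(depth - focus_depth) / (
        depth * (focus_depth - focal_len))
    return min(coc, 0.01)  # clamp so distant backgrounds don't blur to mush
```

Objects at the focus depth get zero blur, and blur grows as objects move away from it; the returned size could drive a blur kernel radius in a post-processing shader.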
There would still be things to figure out, though:
1. Does the eye adjust after or during movement?
2. How fast should the focus be adjusted? Are there other factors that may affect this?
3. Something that I'm probably not thinking of.
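For question 2, one common trick is frame-rate-independent exponential smoothing of the focus distance, with a single rate knob to tune during testing. Again just a sketch; the function name and the default rate are made up for illustration.

```python
import math

def smooth_focus(current, target, dt, rate=8.0):
    # Exponentially approach the new focus distance each frame.
    # `rate` controls snappiness (higher = faster); dt is the frame
    # time in seconds, so behavior is the same at 30 or 144 fps.
    alpha = 1.0 - math.exp(-rate * dt)
    return current + (target - current) * alpha
```

Called once per frame with the gaze-derived target distance, this eases the focal plane toward the new target instead of snapping, which should help with the jarring refocus described above.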

3: Will you help me?
I'm only an amateur programmer who works and goes to school. I will need A LOT of help to make this thing a reality. I would probably need help in these fields:
1. OpenGL programmers. The first goal is just a test demo, and I think OpenGL would be the best fit for that.
2. Anyone who has access to an eye tracker. Idk what it's called; I'll post a link to it later.
3. JUST ABOUT ANYONE

4: Goals
This will probably only be a summer project for me; it's just an idea, though I have no problem taking it further for mods or anything else. My main goal is just to see if it works. Is this feasible? Probably, in some OpenGL tech demo.

Sorry about any spelling errors and the formatting. I'm on a phone, in a car, and really carsick right now. I'll try to fix it later.

I think there is a better argument to be made for always rendering as much of the scene as possible in focus. The sharp part of our field of view is tiny; I don't remember the exact numbers, but the high-acuity foveal region only covers a couple of degrees. In the real world, it's reasonable to say everything is perfectly rendered and in focus at all times, and it's our eyes that can't process that much data. So if you render a scene perfectly in focus, human biology takes over and provides the restriction for you. Things in your peripheral vision are technically in focus because they've been rendered that way, but since your sharp field of vision is so limited, it doesn't matter.

That said, eye tracking is really interesting tech, and it reveals a lot of the hidden biological error correction we do. When you move your eye to a new target (a saccade), you almost always overshoot and then perform a second, smaller corrective saccade to get on target.

This comic has a reasonable description of human FoV.