1. Actually, not always. For a given compressor technology, the higher the pressure ratio, the higher the compressor exit temperature: the ideal relation is T2/T1 = (P2/P1)^((gamma-1)/gamma), and the actual temperature rise is that ideal rise divided by the compressor efficiency. The higher the pressure ratio, the more efficient the engine, but also the lower the compressor efficiency. The real technological limit, though, is the turbine inlet temperature. There is a trade here, with more cooling allowing a higher turbine inlet temperature; a trade study is performed to meet maintenance promises. So at higher pressure ratios, the fuel-air mixture must be leaned (all engines operate lean overall) to keep the turbine inlet temperature from increasing.
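As a minimal sketch of that relation (all numbers are illustrative assumptions, not data for any particular engine):

```python
def compressor_exit_temp(t1, pr, gamma=1.4, eta_c=0.88):
    """Actual compressor exit temperature T2.

    Ideal relation: T2_ideal/T1 = PR**((gamma-1)/gamma).
    The real temperature rise is the ideal rise divided by the
    compressor (isentropic) efficiency eta_c. The gamma and eta_c
    defaults are assumed illustrative values.
    """
    t2_ideal = t1 * pr ** ((gamma - 1.0) / gamma)
    return t1 + (t2_ideal - t1) / eta_c

# Higher pressure ratio -> higher compressor exit temperature:
for pr in (10, 30, 50):
    print(pr, round(compressor_exit_temp(288.15, pr), 1))
```

This is why pushing the pressure ratio up runs you into the compressor exit (and hence turbine inlet) temperature limit.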
2. No, not higher oxygen content. The higher the temperature, the greater the efficiency: there is more work available from the turbines for a given drop in pressure ratio, and thus more surplus power to drive the fan (I'm assuming a high-bypass engine). Higher pressure means more work. But for any given limit on turbine inlet temperature, there is a pressure ratio of peak efficiency. In many ways, this defines the generation of the engine.
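That peak can be seen in a heavily simplified Brayton-cycle sweep (constant gamma, no pressure losses, no cooling flows; every number below is an assumed illustrative value):

```python
def thermal_efficiency(pr, t1=288.0, t4=1700.0, eta_c=0.88, eta_t=0.90, gamma=1.4):
    """Simple-cycle thermal efficiency at a fixed turbine inlet temperature t4.

    Heavily simplified sketch: constant gamma/cp, no pressure losses,
    no cooling flows. All default values are illustrative assumptions.
    """
    tau = pr ** ((gamma - 1.0) / gamma)
    t2 = t1 + t1 * (tau - 1.0) / eta_c        # actual compressor exit temp
    t5 = t4 - eta_t * t4 * (1.0 - 1.0 / tau)  # actual turbine exit temp
    w_net = (t4 - t5) - (t2 - t1)             # net specific work (per unit cp)
    return w_net / (t4 - t2)                  # heat added per unit cp is t4 - t2

# With t4 capped, efficiency rises with pressure ratio, peaks, then falls:
best_pr = max(range(2, 81), key=thermal_efficiency)
print("peak-efficiency pressure ratio ~", best_pr)
```

Raise the allowed t4 (better materials, better cooling) and the peak moves to a higher pressure ratio, which is exactly the "generation of the engine" point above.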
3. Higher pressure means that at each turbine stage, dH = V dP (way over-simplified), where dH is the change in enthalpy (the potential work to extract). So a greater pressure ratio, when you integrate that equation (which has many terms that vary and are thus complex to integrate in the real world), yields more potential work.
4. Higher temperature means a higher V (a greater volume for the same gas), so the same equation, dH = V dP, applies.
H = enthalpy, where dH = Cp dT + T dCp
V = volume of the gas powering the turbine (higher temperature means higher volume)
dP = the change in pressure.
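To make dH = V dP concrete, here is a sketch that integrates v dP numerically across a turbine, assuming an ideal gas expanding along an isentropic path (the gas constant, gamma, pressures, and temperature are all assumed illustrative values):

```python
R = 287.0   # J/(kg K), gas constant for air (assumed for the hot gas too)
g = 1.33    # gamma for hot combustion products (assumed)

def turbine_specific_work(p_in, p_out, t_in, steps=10_000):
    """Approximate w = integral of v dP from p_out up to p_in.

    Along an isentropic path dH = v dP, with T/T_in = (P/P_in)**((g-1)/g)
    and ideal-gas v = R*T/P (higher temperature -> higher specific volume).
    Summed in small pressure steps instead of solved analytically.
    """
    w = 0.0
    dp = (p_in - p_out) / steps
    p = p_in
    for _ in range(steps):
        t = t_in * (p / p_in) ** ((g - 1.0) / g)
        w += (R * t / p) * dp   # v dP contribution of this pressure slice
        p -= dp
    return w

# A larger pressure ratio (and a hot, high-volume gas) yields more work:
w_lo = turbine_specific_work(1.0e6, 1.0e5, 1700.0)
w_hi = turbine_specific_work(3.0e6, 1.0e5, 1700.0)
print(round(w_lo), round(w_hi))
```

In a real engine every term in that integrand varies (Cp, efficiency, clearances), which is why I stick to the derivative form in the discussion below.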
Now, I work with the derivative because the integration is quite complex, and I'm not going into all the terms that change as a function of pressure or specific volume (turbine blade clearance leakage vs. flow area, drag per flow area in a turbine), temperature (mostly Cp, but also turbine efficiency, due to shape limits at the highest temperatures), and pressure (at higher pressures, structural concerns impact efficiency due to the forces involved).
Optimum pressure ratio is a function of engine size. Larger engines have a lower ratio of leak path area to flow path area, so their compressor and turbine efficiencies improve slightly, and thus the optimum pressure ratio goes up.
Optimum pressure ratio is a function of compressor efficiency. Today's higher-Mach-number compressors with lower blade counts (less blade area drag) are much more efficient, with the PW1100G being the extreme (today), but that is creating bearing/shaft issues.
Optimum pressure ratio is a function of turbine clearance control technology. This technology is always improving, so the optimal pressure ratio increases with time for new engines. The trick is designing for good efficiency after overhaul (which has improved with dramatically better break-in operations prior to delivery to the airlines).
Optimum pressure ratio is also a function of counter- or co-rotation. Since counter-rotation really improves the efficiency of the first row of the low turbine, a counter-rotating engine will have a higher optimum pressure ratio than a co-rotating one.
Optimum pressure ratio is a function of combustor technology, since higher temperatures increase NOx formation and can limit the allowed pressure.
Optimum pressure ratio is a function of turbine materials and cooling technology. Every improvement increases the optimum pressure ratio.
Optimum pressure ratio is also a function of combustor technology in that combustor length keeps decreasing. Since this reduces the required cooling, the pressure ratio goes up a little. The same goes for improvements in diffuser technology.
Variable cycle technology also increases the optimum pressure ratio. Basically, variable cycle technology reduces the stress on the combustor and turbine during climb. This increases durability, allowing for a higher pressure ratio. GE uses variable turbine cooling. Pratt was going to go with a variable fan nozzle, but decided to simplify the engine. The idea is right, though; it will just take a higher-thrust engine to make that variable cycle technology pay off. Expect more and more variable cycle technology as engines progress, with the payoff being greatest in the larger engines.
For example, I'm seeing some really aggressive decisions made in the 777X engine (the GE9X). To enable a higher pressure ratio, we're seeing much more aggressive fuel injector cooling. It also has fuel cooling of parts that previously didn't need to be cooled, due to that engine's higher temperatures. (Fuel is commonly used to cool parts of the engine, as there is so much of it available as a coolant.)
The next technology shift will be ceramics; CMCs, I believe, will be first. These reduce the required cooling, allowing for higher compressor exit temperatures, which pushes the technology to the next stage.
For more, you need to understand the thermodynamics of jet engines. Start with the Brayton cycle: https://en.wikipedia.org/wiki/Brayton_cycle
Then you need to look at propulsive efficiency: https://en.wikipedia.org/wiki/Propulsive_efficiency
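The core relation there, for a simple jet, is the Froude propulsive efficiency; a quick sketch with assumed velocities:

```python
def propulsive_efficiency(v_flight, v_jet):
    """Froude propulsive efficiency: eta_p = 2 / (1 + v_jet / v_flight).

    A high-bypass fan moves a lot of air only a little faster than
    flight speed (low v_jet), which is why it is so much more efficient
    than a pure turbojet. Velocities below are illustrative assumptions
    in m/s, not data for any particular engine.
    """
    return 2.0 / (1.0 + v_jet / v_flight)

print(round(propulsive_efficiency(250.0, 350.0), 3))  # high-bypass-like
print(round(propulsive_efficiency(250.0, 900.0), 3))  # turbojet-like
```

This is the other half of the story: the core cycle sets how much work you extract, and the fan/nozzle velocities set how efficiently that work becomes thrust.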
Then it becomes a study of part-throttle efficiency, manufacturing and maintenance costs, and development risk. Part-throttle (idle) efficiency is one reason all engines now have two or three shafts: the low spool barely turns at idle. It isn't spinning fast enough to do work on the air, so the compressor bleed will be fully open to prevent compressor surge during the ramp from idle to takeoff, or from flight idle to full throttle for an aborted landing. "Compressor surge" is a poor term that actually describes the compressor stalling and back-flowing: the fuel/air mixture from the combustor flows back into the compressor and even out the front of the engine, which can be spectacular if that fuel ignites.
I hope this helped,