An intuitive (read: incredibly non-rigorous) argument for why 2^k-3^x >> x

This problem of minimizing 2^k-3^x boils down to finding a rational approximation k/x > log(3)/log(2)
Any approximation will have some error, which we will call e (for epsilon):
k/x + e = log(3)/log(2)
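For a concrete feel (a quick sketch, not part of the argument), here is the setup in Python with the small approximation k/x = 19/12, one of the convergents of log2(3):

```python
import math

ratio = math.log(3) / math.log(2)   # log(3)/log(2) = log2(3) ~ 1.5849625
k, x = 19, 12                       # a good small approximation
e = ratio - k / x                   # the error term from k/x + e = log(3)/log(2)
print(e)                            # ~0.00163: positive, so 19/12 undershoots the ratio
print(2**k - 3**x)                  # -7153: and indeed 2^19 - 3^12 < 0
```

Note the sign: approximations can land on either side of the ratio, which is exactly the caveat about 2^k-3^x going negative discussed below.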

Using PARI/GP we can get some idea of what this error term looks like. To do this we will use bestappr, a very efficient built-in function that returns the best rational approximation to a given value subject to a bound on the denominator.

One caveat here is that bestappr may return a value slightly greater OR less than the target (meaning we could end up with 2^k-3^x < 0). But we will set that aside for the moment by taking the absolute value, as it shouldn’t affect the overall trends significantly.

\p100000  /* set internal precision to 100,000 digits */
default(format,"g.20"); /* only display 20 digits */
ratio = log(3)/log(2);  /* calculate ratio of logs */
{ /* loop through increasing bounds of best approximations and output epsilon */
for(e2=0,3,
  for(e1=1,9,    
    print("x < 10^", e1*10^e2, " ==> e ~= 10^", log(abs(bestappr(ratio,10^(e1*10^e2)) - ratio)) / log(10) )
  )
)
}
x < 10^1 ==> e ~= 10^-1.8228243804796405014
x < 10^2 ==> e ~= 10^-4.2453433039150728730
x < 10^3 ==> e ~= 10^-7.0236219620428187197
x < 10^4 ==> e ~= 10^-7.0236219620428187197
x < 10^5 ==> e ~= 10^-10.176249408780654612
x < 10^6 ==> e ~= 10^-12.311194554306875613
x < 10^7 ==> e ~= 10^-12.311194554306875613
x < 10^8 ==> e ~= 10^-16.030274407345733159
x < 10^9 ==> e ~= 10^-18.415578612881165229
x < 10^10 ==> e ~= 10^-20.654186555569145934
x < 10^20 ==> e ~= 10^-40.199156849075345448
x < 10^30 ==> e ~= 10^-59.920842552220234471
x < 10^40 ==> e ~= 10^-80.346429225035173909
x < 10^50 ==> e ~= 10^-99.937406132051136127
x < 10^60 ==> e ~= 10^-119.97127515673137044
x < 10^70 ==> e ~= 10^-139.69695015294840907
x < 10^80 ==> e ~= 10^-160.44395560441521028
x < 10^90 ==> e ~= 10^-179.92725464020852881
x < 10^100 ==> e ~= 10^-200.45524463059795440
x < 10^200 ==> e ~= 10^-399.22301115725001623
x < 10^300 ==> e ~= 10^-599.67122224094073456
x < 10^400 ==> e ~= 10^-801.35404829404855956
x < 10^500 ==> e ~= 10^-999.67221566437351735
x < 10^600 ==> e ~= 10^-1199.6212469951420429
x < 10^700 ==> e ~= 10^-1400.4543654430773087
x < 10^800 ==> e ~= 10^-1599.9916384554983445
x < 10^900 ==> e ~= 10^-1798.5552418279736033
x < 10^1000 ==> e ~= 10^-2000.1239828667323040
x < 10^2000 ==> e ~= 10^-4000.3518425515634228
x < 10^3000 ==> e ~= 10^-6000.1168082826190546
x < 10^4000 ==> e ~= 10^-8000.0298060931941417
x < 10^5000 ==> e ~= 10^-9998.2083264968528684
x < 10^6000 ==> e ~= 10^-11999.298228127592448
x < 10^7000 ==> e ~= 10^-14000.016936690634498
x < 10^8000 ==> e ~= 10^-15999.534387654535336
x < 10^9000 ==> e ~= 10^-18000.434237791617552
plus a few more
x < 10^10000 ==> e ~= 10^-20000.013398205843751
x < 10^20000 ==> e ~= 10^-39998.619814889119685
x < 10^50000 ==> e ~= 10^-99999.94237051579762

We can see from these results that if x is on the order of 10^y, then e seems pretty well bound to 10^-(2*y±2).
Or alternatively: e ~= 1/x^2
This matches the classical continued-fraction bound: every convergent p/q of an irrational number a satisfies |a - p/q| < 1/q^2.
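The same scaling can be reproduced without PARI/GP. Here is a Python sketch using Decimal for a high-precision ratio and Fraction.limit_denominator in place of bestappr. One caveat: limit_denominator minimizes the absolute error, which is a slightly different notion of "best" than bestappr's, so individual values may differ a little, but the ~1/x^2 trend shows through either way:

```python
from decimal import Decimal, getcontext
from fractions import Fraction
import math

getcontext().prec = 120                    # ~120 significant digits, plenty for these bounds
ratio = Decimal(3).ln() / Decimal(2).ln()  # log(3)/log(2) to high precision
target = Fraction(ratio)                   # exact rational snapshot of the Decimal

log_errs = {}
for d in range(1, 7):                      # denominator bounds 10^1 .. 10^6
    approx = target.limit_denominator(10**d)
    log_errs[d] = math.log10(abs(approx - target))
    print(f"x < 10^{d} ==> e ~= 10^{log_errs[d]:.4f}")
```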

Next I would like to examine roughly how small e would need to be to satisfy 2^k-3^x < x.
Rearranging: k/x + e = log(3)/log(2)
we get k = x*(log(3)/log(2)-e)
Then we can substitute k into 2^k-3^x < x to get
2^(x*(log(3)/log(2)-e))-3^x < x
I’m not sure how to solve directly for e here, but I can at least test increasingly negative powers of 10 in the exponent until one satisfies the inequality, to get some feel for it:

\p10000  /* plenty of precision for 3^10000 (~4771 digits) */
{ /* for each x, find the first offset 10^-e1 that satisfies the inequality */
for(ex=1,4,
  my(x=10^ex);
  for(e1=1,10^ex,
    if(2^(x*log(3)/log(2) + 10^-e1) - 3^x < x,
      print("x=", x, "   e ~= 10^-", e1);
      break
    )
  )
)
}

Which gives us

x=10   e ~= 10^-4
x=100   e ~= 10^-46
x=1000   e ~= 10^-474
x=10000   e ~= 10^-4768

So that seems to be something like e < 10^-(0.47*x).
In hindsight the constant makes sense: to first order, 2^(x*(log(3)/log(2)-e))-3^x ~= -3^x*x*e*ln(2), so |2^k-3^x| < x requires |e| < 1/(3^x*ln(2)) ~= 10^-(0.477*x), since log10(3) ~= 0.477.
Either way, this is absurdly smaller than the e ~= 1/x^2 that we got before.
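As a cross-check (a sketch, not part of the original GP session), the thresholds found by the brute-force loop can be predicted in closed form. Following the loop's convention of adding the offset 10^-e1 directly to the exponent, the condition 2^(x*log2(3)+d)-3^x < x becomes 3^x*(2^d - 1) < x, i.e. d < log2(1 + x/3^x) ~ x/(3^x*ln 2):

```python
import math

LOG10_3 = math.log10(3)  # ~0.47712, the source of the "0.47" slope

def predicted_e1(x):
    # -log10(threshold) = x*log10(3) - log10(x) + log10(ln 2)
    t = x * LOG10_3 - math.log10(x) + math.log10(math.log(2))
    return math.floor(t) + 1  # first integer exponent past the threshold

for ex in range(1, 5):
    x = 10**ex
    print(f"x={x}   e ~= 10^-{predicted_e1(x)}")
```

This reproduces the e1 values 4, 46, 474, 4768 from the search above, which supports reading the slope as log10(3) ~ 0.477 rather than an empirical 0.47.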


I have a feeling that your test with bestappr is prone to rounding errors from the computer calculations. That could explain why you are finding such different results.

PARI/GP can handle arbitrary-precision calculations, and in this case I set the precision high enough that it should not affect the outcome. I’m not sure what you mean about such different results. The error bounds are surprisingly tight (following roughly 1/x^2), in my opinion, across a very wide range of magnitudes.

Maybe I didn’t explain it well enough, but the first part shows the absolute best-case error within a given bound on x, and the second part shows what kind of error would be required to satisfy the inequality.
The fact that the best-case precision narrows only quadratically with respect to x, while the required error shrinks exponentially, is the whole point: the inequality is hopelessly out of reach, and the two diverge incredibly fast.

I see! I had misread your explanations.