Mathematics Stack Exchange News Feeds

  • Why $f(z+1)=f(z)$ implies $f$ can be expressed as a function of $e^{2\pi iz}$
    by roydiptajit on July 31, 2021 at 7:07 pm

    I am reading about modular forms in J.-P. Serre’s book, where I came across a complex function which satisfies the property $f(z+1)=f(z)$. Then it is mentioned that we can express $f$ as a function of $e^{2\pi iz}$. I can see that any function of $e^{2\pi iz}$ always satisfies the above property, but why is the converse true?

  • Can $\pi$ be defined in a p-adic context?
    by doetoe on July 31, 2021 at 5:27 pm

    I am not at all an expert in p-adic analysis, but I was wondering if there is any sensible (or even generally accepted) way to define the number $\pi$ in $\mathbb Q_p$ or $\mathbb C_p$. I think that circles, therefore also angles, are problematic in a p-adic context, but $\pi$ appears in many other contexts. Of course there are many known series that sum to $\pi$, some may converge p-adically, but those that converge may have different limits, and I think some more motivation would be needed to designate one as an analog of $\pi$. Maybe one could find an analog based on $e^{n\pi i} = (-1)^n$, or even $\int_{\mathbb R} e^{-x^2/2}dx = \sqrt\pi$. So my question: Is there or are there p-adic definitions of $\pi$? If not, could we sensibly define $\pi_p$, and how?

  • Finding $\displaystyle \lim_{n\to \infty} (x_0 x_1…x_n)\sqrt{n}$ where $x_{n+1}=x_n^3-x_n^2+1$, $x_0=\frac{1}{2}$
    by TheZone on July 31, 2021 at 2:59 pm

    Let $(x_n)$ be the sequence defined by $x_0=\frac{1}{2}$ and $x_{n+1}=x_n^3-x_n^2+1$ for any $n\in \mathbb{N}\cup \{0\}$. Find $\displaystyle \lim_{n\to \infty} (x_0 x_1…x_n)\sqrt{n}$. According to the answer sheet, this limit equals $1$. However, I can’t manage to solve it. Here is what I’ve done. Obviously, $x_{n+1}-x_n=(x_n-1)^2(x_n+1)>0$ (it is easy to observe that all the terms of the sequence are positive), so $(x_n)$ is a strictly increasing sequence. Let us now prove by induction on $n$ that $x_n<1$ for all $n\in \mathbb{N}\cup \{0\}$. The base case is obvious, so suppose it holds for $n$ and prove it for $n+1$. $x_{n+1}=x_n^2(x_n-1)+1<1$ by the induction hypothesis and we are done. Hence, $(x_n)$ is monotone and bounded, so it is convergent. It is easy to see now that $\displaystyle \lim_{n\to \infty}x_n=1$. Now I pretty much got stuck. I tried to use the epsilon definition of a limit, trying to exploit $\displaystyle \lim_{n\to \infty}x_n=1$, but it didn’t help. Maybe I should use Stolz-Cesaro on the limit that I want to compute?
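    A quick numerical sanity check (a Python sketch, with the iteration count `N` chosen by me) is consistent with the answer sheet’s value of $1$:

```python
import math

x, p = 0.5, 1.0            # x_0 = 1/2, running product
N = 100_000
for _ in range(N):
    p *= x                  # p = x_0 * x_1 * ... * x_n
    x = x**3 - x**2 + 1     # the recursion x_{n+1} = x_n^3 - x_n^2 + 1

value = p * math.sqrt(N)
print(value)                # ≈ 1
```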

  • Use mathematical induction to prove that no matter how we pick $n + 1$ numbers from $1, 2, \ldots , 2n$, one of them will be a divisor of another one.
    by Mister Pro on July 31, 2021 at 2:50 pm

    Use mathematical induction to prove that no matter how we pick $n + 1$ numbers from $1, 2, \ldots , 2n$, one of them will be a divisor of another one. I understand how to use the pigeonhole principle to prove this, but I do not know how to prove it using mathematical induction. The two proofs are supposed to combine into another proof once they are both completed, although I am unsure what proof that would be, as I haven’t proved this statement using induction. What would the proof by induction be? What is that other proof? Any kind of help is appreciated! Thank you!
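    Though not a proof, a brute-force check (a Python sketch for small $n$ of my choosing) confirms both the statement and that $n+1$ is sharp:

```python
from itertools import combinations

def has_divisor_pair(nums):
    """True if some element of nums divides a different, larger element."""
    s = sorted(nums)
    return any(b % a == 0 for a, b in combinations(s, 2))

for n in range(1, 9):
    pool = range(1, 2 * n + 1)
    # every choice of n+1 numbers from 1..2n contains a divisor pair
    assert all(has_divisor_pair(c) for c in combinations(pool, n + 1))
    # sharp: the n numbers n+1, ..., 2n contain no divisor pair
    assert not has_divisor_pair(range(n + 1, 2 * n + 1))
```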

  • Ideal topology on commutative ring
    by PTom on July 31, 2021 at 11:57 am

    One small question on the book ‘Real and Functional Analysis’ by S. Lang: Example 6 on page 21: Let $R$ be a commutative ring. We define a subset $U$ of $R$ to be open if for each $x\in U$ there exists an ideal $J$ in $R$ such that $x+J\subseteq U$. This is called the ideal topology. It seems to me that this definition is useless: All sets will be open because we can always pick $J=0$. Unfortunately I couldn’t find anything related to this ‘ideal topology’ via a Google search, so I don’t know what is meant in the book. Perhaps requiring $J\ne0$?

  • Solve for $x$: $ 4^{4x}-4^x=(4x)!$
    by Jitendra Singh on July 30, 2021 at 3:13 pm

    Solve for $x$: $$ 4^{4x}-4^x=(4x)! $$ (for all real values) The biggest problem is that I have not been taught induction yet, so it can’t be used. Attempt $1$: Assume $4^x=t$. So we get $t^4-t=24 (x!)$. Letting $u$ denote $x!$: $$ t^4-t-24u=0 \implies u=\frac{t}{24}(t^3-1).$$ Attempt $2$: Taking $4^x$ out as a common factor: $$ 4^x(4^{3x}-1)=4x(4x-1)(4x-2)…(2)(1) $$ However, this also yielded nothing. Attempt $3$: I tried brute force. Our equation is $4^{4x}-4^x-(4x)!=0$. Let $x$ be $-1$ and, taking only the LHS, $$ 4^{-4}-4^{-1}-(4(-1))! = \text{ infinity. }$$ We can exclude all negative cases, since negative factorials don’t exist. So, putting in the value $0$: $$ 4^0-4^0-0! \implies -1 $$ Using the value $1$: $$ 4^4-4-4! \implies 228 $$ This implies the value will be between 1 and 2. However, the Wolfram Alpha result (which I will discuss at the end) doesn’t agree with me. So how do I find the answer to this question without induction? Also, why didn’t Approach 3 work for me? And according to Wolfram Alpha over here, how can I find such complex roots? Please help me solve this question.
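    Regarding Attempt 3: if one extends $(4x)!$ to real $x$ as $\Gamma(4x+1)$, a quick numeric check (a Python sketch of mine, not from the post) shows the sign change of LHS $-$ RHS actually happens in $(0,1)$, not $(1,2)$:

```python
import math

def h(x):
    # 4^{4x} - 4^x - (4x)!, with (4x)! read as Gamma(4x + 1) for real x >= 0
    return 4**(4 * x) - 4**x - math.gamma(4 * x + 1)

# h(0) = 1 - 1 - 0! = -1 and h(1) = 256 - 4 - 24 = 228: a real root in (0, 1)
assert h(0) == -1 and h(1) == 228

# bisection for that root
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
assert 0 < root < 1 and abs(h(root)) < 1e-6
```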

  • Is $4| \left(\prod_{i=1}^n a_i-\prod_{i=1}^n b_i\right)$ always true?
    by Ritam_Dasgupta on July 29, 2021 at 5:39 pm

    Question: Suppose $a_i$ and $b_i$ are all integers, $1\leq i\leq n$, and the following conditions are known: $$\sum a_i=\sum b_i {\tag 1}$$ For every $k \in \mathbb{Z}$ with $2\leq k \leq n-1$ and all distinct indices $i_1, i_2,…,i_k \in \{1,2,…,n\}$, it is true that: $$\sum_{cyclic} a_{i_1}\cdot a_{i_2}…\cdot a_{i_k}=\sum_{cyclic} b_{i_1} \cdot b_{i_2} \cdot…\cdot b_{i_k} {\tag 2}$$ If $(1)$ and $(2)$ are true, is it true that $\prod a_i \equiv \prod b_i \mod 4$? If this is true, is this property unique to $\bmod\ 4$? Motivation: I came across this post here today, and this problem is the generalized case. I have posted a brute-force solution for the linked question, for the $n=3$ case, where it holds. But my approach provided no insight for a general solution, which would be more interesting to me. Attempt: I tried to come up with an alternative solution for the $n=3$ case itself, hoping that perhaps that would be useful for a generalization. But even this I couldn’t finish. Here is my attempt: $$a+b+c=w+z+y {\tag 3}$$ and $$ab+bc+ac=wz+wy+zy {\tag 4}$$ We wish to arrive at $$abc \equiv wyz \mod 4$$ Squaring $(3)$ and using $(4)$ allows us to arrive at $a^2+b^2+c^2=w^2+z^2+y^2$. This means that $a^3+b^3+c^3-3abc= w^3+y^3+z^3-3wyz$. Hence, if I could show that $$\sum a^3 \equiv \sum w^3 \mod 4,$$ our problem would be finished. But this is where I got stuck. Help with this problem would be appreciated.
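    For the $n=3$ case, the claim can be stress-tested by brute force over a small integer range (a Python sketch; the range $[-4,4]$ is my arbitrary choice, and the variables follow the attempt above):

```python
from itertools import product

counterexamples = []
for a, b, c, w, y, z in product(range(-4, 5), repeat=6):
    same_e1 = a + b + c == w + y + z                    # condition (1)/(3)
    same_e2 = a*b + b*c + c*a == w*y + y*z + z*w        # condition (2)/(4)
    if same_e1 and same_e2 and (a*b*c - w*y*z) % 4 != 0:
        counterexamples.append((a, b, c, w, y, z))

# no counterexample in this range, consistent with the n = 3 claim
assert counterexamples == []
```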

  • How do I solve this integral $\int_{0}^{2\pi}e^{-\sin^{2}(x)}\cos\left(6x-\frac{\sin(2x)}{2}\right)\,dx$? [duplicate]
    by Rene Morningstar on July 29, 2021 at 3:39 pm

    I am interested in various solutions to this integral, here is one of the versions: $$…=\text{Re}\int\limits_0^{2\pi}e^{-\sin^2x+i\left(6x-\frac{\sin(2x)}{2}\right)}dx=e^{-\frac{1}{2}}\text{Re}\int\limits_0^{2\pi}e^{\frac{\cos(2x)}{2}+6ix-\frac{i\sin(2x)}{2}}dx=$$$$=e^{-\frac{1}{2}}\text{Re}\int\limits_0^{2\pi}e^{6ix+\frac{e^{-2ix}}{2}}dx=e^{-\frac{1}{2}}\text{Re}\int\limits_{|z|=1}\frac{z^6\cdot e^{\frac{1}{2z^2}}}{iz}dz=-e^{-\frac{1}{2}}\text{Re}\ i\int\limits_{|z|=1}z^5\cdot e^{\frac{1}{2z^2}}dz=$$$$=2\pi e^{-\frac{1}{2}}\text{Re}\left(\mathrm{Res}_0z^5\cdot e^{\frac{1}{2z^2}}\right)=\frac{2\pi e^{-\frac{1}{2}}}{48}=\frac{\pi}{24\sqrt e}.$$
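    The residue computation can be cross-checked numerically: the trapezoidal rule is spectrally accurate for a smooth periodic integrand, so even a modest grid (the size `N` below is my choice) reproduces $\pi/(24\sqrt e)$:

```python
import math

def f(x):
    return math.exp(-math.sin(x)**2) * math.cos(6*x - math.sin(2*x) / 2)

N = 512
h = 2 * math.pi / N
numeric = h * sum(f(k * h) for k in range(N))   # trapezoid over one full period
exact = math.pi / (24 * math.sqrt(math.e))      # ≈ 0.07940

assert abs(numeric - exact) < 1e-12
```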

  • Proof that f = g a.e. with Fatou’s lemma
    by Flo on July 29, 2021 at 3:21 pm

    In the book “A user friendly introduction to Lebesgue measure and integration” by Nelson, Exercise 24 in Ch. 2 states: Let $(f_n)$ be a sequence of functions in $\mathcal L [a,b]$. Suppose $f \in \mathcal L [a,b]$ and $$ \lim_{n\to \infty}\int_a^b|f_n-f| = 0. $$ If the sequence $(f_n)$ converges pointwise a.e. on $[a,b]$ to the function $g$, show that $f=g$ a.e. on $[a,b]$. Suggestion: Consider the sequence $(|f-f_n|)$ and Fatou’s Lemma. My try: From $\lim_{n\to\infty} f_n =g$ a.e., we know that $\lim_{n\to\infty} f_n -f = g - f$ a.e. and $\lim_{n\to\infty} | f_n -f | = | g - f|$ a.e. Integrating, we have \begin{align} 0\leq \int_a^b|g-f|&=\int_a^b\lim_{n\to\infty}|f_n -f|=\int_a^b\liminf_{n\to\infty}|f_n -f|\\ &\leq \liminf_{n\to\infty}\int_a^b|f_n -f| = 0, \end{align} where the second inequality follows from Fatou’s lemma and the last step from the assumption that $\lim_{n\to \infty}\int_a^b|f_n-f| = 0$. We conclude that $|g-f| = 0$ a.e. Is this the correct direction to approach the exercise? I don’t see how to go from the above result to showing that $f=g$ a.e. I see that $\int_a^b(g-f) \leq \int_a^b|g-f|$…

  • Problems with seemingly not enough information
    by Yly on July 28, 2021 at 6:35 pm

    Two of my favorite geometry problems are as follows: Consider two concentric circles with the property that a chord of the larger circle with length 20 is tangent to the inner circle. What is the area of the region between the circles? Consider a sphere with a circular hole drilled through its center, such that the height of the remaining ring (in the direction along the hole) is $h$. What is the volume of the ring? The fascinating thing about these problems is that they seem to be under-determined: In the first case, it seems that you should need to know at least one of the circles’ radii; in the second case it seems that you should need to know the radius of the sphere or the radius of the hole. It turns out that the answer is independent of these unknown quantities, however, so the questions are well posed. Another cute fact about these problems is that, supposing them to be well posed, they admit very easy computations of their answers, since we can choose the unknown parameters to be whatever we want to facilitate the computation: In the first case, choosing the radius of the inner circle to be zero, the chord is a diameter of the outer circle, and the desired area is just the area of the same circle, $\pi \times 10^2 = 100 \pi$. In the second case, choosing the radius of the hole to be zero, the volume is just that of a full sphere of radius $h/2$, i.e. $\frac{1}{6}\pi h^3$. Question: What are other examples of problems which seem to be ill-posed, but are not? I once thought that these two problems were anomalous, but I’ve recently discovered there are other examples. (I will post one if no one else does.) The examples need not come from geometry. Please post only one problem per answer, and if (as with the above problems) a computation is simplified by assuming the problem to be well posed, please explain.
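    The napkin-ring fact is easy to verify directly; a numerical cross-section integral (a Python sketch, with my own function name and grid size) shows the volume $\pi h^3/6$ is independent of the sphere radius $R$:

```python
import math

def ring_volume(R, h, N=100_000):
    """Volume left after drilling a central hole so the ring has height h."""
    r_sq = R*R - (h/2)**2           # squared radius of the hole
    dz = h / N
    total = 0.0
    for i in range(N):
        z = -h/2 + (i + 0.5) * dz   # midpoint rule along the hole's axis
        total += math.pi * ((R*R - z*z) - r_sq) * dz
    return total

h = 2.0
expected = math.pi * h**3 / 6       # sphere of radius h/2, as argued above
for R in (1.0, 3.0, 50.0):
    assert abs(ring_volume(R, h) - expected) < 1e-6
```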

  • Show $\lim\limits_{t\to\infty}\Bigg|\sum\limits_{n=0}^\infty\Theta(t-nR)\frac{(\Gamma(t-nR))^n}{n!}e^{-\Gamma(t-nR)}\Bigg|^2=\frac1{(1+\Gamma R)^2}$
    by Kiryl Pesotski on July 28, 2021 at 6:17 pm

    I have encountered the following problem while studying non-Markovian effects in the real-time dynamics of open quantum systems. In particular, I was studying a system comprised of two qubits (qubit is a standard shorthand for a two-level quantum system) separated by a distance $R$ (in configuration space, e.g. a laboratory) from one another and coupled to a one-dimensional bosonic reservoir hosting a pair of boson species corresponding to right- and left-moving photon fields with a linear dispersion relation and fixed propagation speed (equal to $1$ in my units; Planck’s constant is set to be $2\pi$). It is really one of the simplest systems one can imagine that possess so-called delayed coherent quantum feedback, the property that the dynamics of the quantum system at any time $T$ depend on the entire history of the dynamics for all times $t\in[0, T]$ in a deterministic (i.e. classical, not quantum, and thus very controllable) fashion. You can easily guess, without any knowledge of physics whatsoever, that quantum information propagates with finite velocity $v=1$ over a given distance $R$, so there is an intrinsic time delay $\tau=vR$ in the system, and due to Lorentz covariance this cannot be altered by any quantum effects. The Hamiltonian operator of such a system can be written as $H=H_{0}+V$, where \begin{align}H_{0}&=\sum_{n=1, 2}\frac{\Delta_{n}}{2}\sigma_{3}^{(n)}+\sum_{\mu=1, 2}\int_{-\infty}^{\infty}dk\omega_{\mu}(k)a^{\dagger}_{\mu}(k)a_{\mu}(k),\\V&=\sum_{n=1, 2}\sum_{\mu=1, 2}\int_{-\infty}^{\infty}dkg_{\mu, n}(k)a^{\dagger}_{\mu}(k)\sigma_{+}^{(n)}+g_{\mu, n}^{*}(k)\sigma_{-}^{(n)}a_{\mu}(k).\end{align} Here $\Delta_{n}$ is the detuning of qubit number $n$ from some reference energy $k_{0}$, i.e. 
$\Delta_{n}=\Omega_{n}-k_{0}$, where $\Omega_{n}$ is the transition frequency of qubit number $n$ and $k_{0}$ is the momentum around which the spectrum of the bath of boson fields was linearised (we of course expect photons of energy $hvk_{0}/(2\pi)=k_{0}$ to couple most strongly to a system with transition frequency $\min[\Omega_{1}, \ \Omega_{2}]$). Further, $\omega_{\mu}(k)=hvk/(2\pi)=k$ is the energy of the photon of flavour $\mu$ and momentum $k$. Such a photon is created by the operator $a^{\dagger}_{\mu}(k)$ and destroyed by $a_{\mu}(k)$; these obey the nonzero commutation relation $[a_{\mu}(k), a^{\dagger}_{\mu'}(k')]=\delta(k-k')\delta_{\mu, \mu'}$. $\sigma_{j}^{(1)}=\sigma_{j}\otimes\sigma_{0}, \ \sigma_{j}^{(2)}=\sigma_{0}\otimes\sigma_{j}$, where $\sigma_{\pm}=(\sigma_{1}+i\sigma_{2})/2$, and $\sigma_{1, 2, 3}$ are the usual Pauli matrices ($\sigma_{0}$ is the identity on $\mathbb{C}^{2}$). The coupling constants are defined as $$g_{\mu, n}(k)=\sqrt{\frac{\Gamma_{n}}{2\pi}}e^{-ic_{\mu}c_{n}(k_{0}+k)R/2},$$ where $\Gamma_{n}$ is the bare decay rate of a single qubit into the continuum, and $c_{s}=(-)^{s+1}, \ s=\mu, \ n$ distinguishes the coupling to right/left photons (index $\mu$) of atom number $n=1, 2$, located at $\pm R/2$. As a mock-up problem I consider the following: the system described by the above Hamiltonian, prepared at time $t_{0}=0$ in the following two-parameter family of states $$|\psi(0)\rangle=(\cos\vartheta|1\rangle\otimes|0\rangle+e^{i\varphi/2}\sin\vartheta|0\rangle\otimes|1\rangle)\otimes|\Omega\rangle.$$ Here the quantum states are ordered as qubit 1, qubit 2, bath, i.e. $|a\rangle\otimes|b\rangle\otimes|c\rangle$ means qubit $1$ is in state $a$, qubit $2$ is in state $b$, and the bosons are in state $c$. Here $|\Omega\rangle$ is the “vacuum” state of the bosons, defined by $a_{\mu}(k)|\Omega\rangle=0,\forall \mu=1,2 , \ k\in\mathbb{R}$. In physics you’ll call such a setup a spontaneous emission problem. 
I was able to deduce, with the help of diagrammatic techniques (see our recent paper https://arxiv.org/pdf/2101.07603.pdf related to the $T\rightarrow\infty$ limit of non-Markovian systems), that the exact survival probability amplitude for the initial state defined above has the form (this result is exact in all parameter regimes iff the Heaviside step function $\Theta(t)$ is defined to equal $1$ at $t=0$; this is indeed ambiguous, since we use both the Hille–Yosida theorem and the Sokhotski–Plemelj theorem, which are good until $T=t_{0}=0$, where divergences happen, and then you have to maintain the order of limits with some care) $$P(t)=|a(t)|^{2},\quad a(t)=\oint_{C_{+\eta}}\langle{\psi(0)|G(z)|\psi(0)\rangle}e^{-izt}\frac{dz}{2\pi i}, \ t>0.$$ Here the integration contour $C_{+\eta}$ is a Bromwich-style contour suspended above the real axis with positive imaginary part $\eta\searrow0$. $G(z)$ is the operator-valued function known in physics as the retarded Green’s function. One thus sees that all the important information about the dynamics is contained in the poles and branch cuts of $G(z)$. 
The projection of the retarded Green’s function onto the single atomic excitation subspace can be determined analytically in exact form: $$ G^{(1)}(z)=\frac{1}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}\Big(g_{1}(z)|10\rangle\langle{10}|+g_{2}(z)|01\rangle\langle{01}|$$ $$-ig_{1}(z)g_{2}(z)\sqrt{\Gamma_{1}\Gamma_{2}}e^{i(z+k_{0})R}(\sigma_{+}^{(1)}\sigma_{-}^{(2)}+\sigma_{-}^{(2)}\sigma_{+}^{(1)})\Big)$$ $$=G_{e1}(z)|10\rangle\langle{10}|+G_{e2}(z)|01\rangle\langle{01}|+G_{o}(z)(\sigma_{+}^{(1)}\sigma_{-}^{(2)}+\sigma_{-}^{(2)}\sigma_{+}^{(1)}), $$ where \begin{align} G_{ej}(z)=\frac{g_{j}(z)}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}, \quad G_{o}(z)=\frac{-ig_{1}(z)g_{2}(z)\sqrt{\Gamma_{1}\Gamma_{2}}e^{i(z+k_{0})R}}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}, \end{align} and $g_{j}(z)=(z-\Delta_{j}+i\Gamma_{j})^{-1}$ are the single-qubit Green’s functions (note that their inverse Laplace transform is trivial, $e^{-i\Delta_{j}t}e^{-\Gamma_{j}t}$, due to locality: a plain exponential decay, as you learned in high school). 
The complete answer is a bit involved but can be clearly expressed in terms of Fourier images of $G_{ej}(z), \ G_{o}(z)$, these are \begin{align} G_{e1}(t)&=\oint_{C_{+\eta}}\frac{dz}{2\pi i}\frac{g_{1}(z)}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}e^{-izt}\\ &=e^{-i\Delta_{1}t}e^{-\Gamma_{1}t}+\sum_{m=1}^{\infty}(-\Gamma_{1}\Gamma_{2}e^{2ik_{0}R})^{m}I(2m, m+1, m, t),\\ G_{e2}(t)&=\oint_{C_{+\eta}}\frac{dz}{2\pi i}\frac{g_{2}(z)}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}e^{-izt}\\ &=e^{-i\Delta_{2}t}e^{-\Gamma_{2}t}+\sum_{m=1}^{\infty}(-\Gamma_{1}\Gamma_{2}e^{2ik_{0}R})^{m}I(2m, m, m+1, t),\\ G_{o}(t)&=\oint_{C_{+\eta}}\frac{dz}{2\pi i}\frac{-ig_{1}(z)g_{2}(z)\sqrt{\Gamma_{1}\Gamma_{2}}e^{i(z+k_{0})R}}{1+g_{1}(z)g_{2}(z)\Gamma_{1}\Gamma_{2}e^{2i(z+k_{0})R}}e^{-izt}\\ =&-i\sqrt{\Gamma_{1}\Gamma_{2}}e^{ik_{0}R}\sum_{m=0}^{\infty}(-\Gamma_{1}\Gamma_{2}e^{2ik_{0}R})^{m}I(2m+1, m+1, m+1, t) \end{align} where \begin{align} &I(a, b, c, t)=\oint_{C_{+\eta}}\frac{dz}{2\pi i}e^{-iz(t-aR)}g_{1}^{b}(z)g_{2}^{c}(z) =\int_{C_{+}}\frac{dz}{2\pi{i}}\frac{e^{-iz(t-aR)}}{(z-\Delta_{1}+i\Gamma_{1})^{b}(z-\Delta_{2}+i\Gamma_{2})^{c}}\\ &=\Theta(t-aR)\Bigg(\sum_{k=0}^{b-1}\frac{(-1)^{k}(c+k-1)!(-i(t-aR))^{b-k-1}}{k!(b-k-1)!(c-1)!}\frac{e^{-i\Delta_{1}(t-aR)}e^{-\Gamma_{1}(t-aR)}}{(\Delta_{2}-\Delta_{1}+i(\Gamma_{2}-\Gamma_{1}))^{c+k}}\\ &+\sum_{k=0}^{c-1}\frac{(-1)^{k}(b+k-1)!(-i(t-aR))^{c-k-1}}{k!(c-k-1)!(b-1)!}\frac{e^{-i\Delta_{2}(t-aR)}e^{-\Gamma_{2}(t-aR)}}{(\Delta_{1}-\Delta_{2}+i(\Gamma_{1}-\Gamma_{2}))^{b+k}}\Bigg). \end{align} So far so good. These functions are looking physically correct, etc. You can see an infinite number of revival peaks in survival probability which happen on all integer multiples of delay time $vR$, i.e. 
when the photon emitted by one atom reaches the other. Isn’t it cool that you can control what the atom does just by moving it around? =) Many physicists, though, have recently been discussing the possibility of so-called DARK states, states for which $p(t)=1, \ \forall t>0$. Their argument is based on the Markov approximation, however, which is the limit where $\max_{n=1, 2}\Gamma_{n}\times vR\ll1$. The first assumption is that the qubits are identical and “bright” (no detuning): $\Delta_{1}=\Delta_{2}=0, \ \Gamma_{n}=\Gamma$. The second is $\theta=\pi/4 \bmod 2\pi$, $\varphi=4\pi \bmod 2\pi$ for bright and $\theta=\pi/4 \bmod 2\pi$, $\varphi=2\pi \bmod 2\pi$ for dark states. My analysis shows \begin{align} p_{B}(t)=&\Bigg{|}G_{e}(t)+G_{o}(t)\Bigg{|}^{2}=\Bigg{|}\sum_{n=0}^{\infty}\Theta(t-nR)\frac{(-\Gamma(t-nR))^{n}}{n!}e^{-\Gamma(t-nR)}\Bigg{|}^{2},\\ p_{D}(t)=&\Bigg{|}G_{e}(t)-G_{o}(t)\Bigg{|}^{2}=\Bigg{|}\sum_{n=0}^{\infty}\Theta(t-nR)\frac{(\Gamma(t-nR))^{n}}{n!}e^{-\Gamma(t-nR)}\Bigg{|}^{2}. \end{align} I.e. for the dark state we obtain $p_{D}(t)=1, \forall t>0$ at $R=0$. For $R\neq0$, when $t\rightarrow \infty$ the series converges to a finite value $$p(t)\rightarrow\text{constant}.$$ Based on numerics, I’d like to claim that the $t\to\infty$ limit of $p_{D}(t)$ is precisely $1/(1+\Gamma R)^{2}$. Can you help me to prove or disprove that? And if you happen to know a closed form for the above series, do not hesitate to share. By the long-time limit I mean taking the above series with all $\Theta$ equal to $1$ (please do not worry about how rigorous this is). To be specific: if you happen to know something about series like $x^{n}(c-nx)^n/n!$, give us a clue. Another clue from numerics: at super large times $\Gamma t\gg1$, the quantum decay of the bright state follows a sub-exponential power law $1/t$ in the long-time regime. Numerics show the $R/(2\pi t)$ law, see picture.
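    A direct numerical evaluation of the truncated series (a Python sketch of mine: terms in log space via `math.lgamma` to avoid overflow, and $\Theta(0)=1$ as stipulated) supports the conjectured $t\to\infty$ value $1/(1+\Gamma R)$ for the amplitude:

```python
import math

def dark_amplitude(t, Gamma, R):
    """Sum_{n>=0} Theta(t-nR) (Gamma*(t-nR))^n e^{-Gamma*(t-nR)} / n!, Theta(0)=1."""
    total, n = 0.0, 0
    while n * R <= t:                      # the Heaviside factor cuts n > t/R
        lam = Gamma * (t - n * R)
        if lam == 0.0:
            total += 1.0 if n == 0 else 0.0   # 0^0/0! = 1; 0^n = 0 for n > 0
        else:
            total += math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))
        n += 1
    return total

Gamma, R = 1.0, 0.5
amp = dark_amplitude(2000.0, Gamma, R)
assert abs(amp - 1.0 / (1.0 + Gamma * R)) < 1e-2
```

For what it’s worth (formally, ignoring convergence of the untruncated series), the Abel/Lagrange identity $\sum_{n\ge0}(a+bn)^n z^n/n! = e^{aW}/(1-bW)$ with $W=ze^{bW}$, applied with $a=\Gamma t$, $b=-\Gamma R$, $z=e^{\Gamma R}$ (so $W=1$), reproduces the same constant $1/(1+\Gamma R)$ for the amplitude.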

  • How to straighten a parabola?
    by sam wolfe on July 28, 2021 at 5:43 pm

    Consider the function $f(x)=a_0x^2$ for some $a_0\in \mathbb{R}^+$. Take $x_0\in\mathbb{R}^+$ so that the arc length $L$ between $(0,0)$ and $(x_0,f(x_0))$ is fixed. Given a different arbitrary $a_1$, how does one find the point $(x_1,y_1)$ so that the arc length is the same? In other words, I’m looking for a function $g:\mathbb{R}^3\to\mathbb{R}$, $g(a_0,a_1,x_0)$, that takes an initial fixed quadratic coefficient $a_0$ and a point and returns the corresponding point after “straightening” via the new coefficient $a_1$, keeping the arc length with respect to $(0,0)$. Note that the $y$ coordinates are simply given by $y_0=f(x_0)$ and $y_1=a_1x_1^2$. Any ideas? My approach: Knowing that the arc length is given by $$ L=\int_0^{x_0}\sqrt{1+(f'(x))^2}\,dx=\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx $$ we can use the conservation of $L$ to write $$ \int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx=\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx, $$ which we solve for $x_1$. This works, but it is not very fast computationally and can only be done numerically (I think), since $$ \int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx=\frac{1}{4a_1}\left(2a_1x_1\sqrt{1+(2a_1x_1)^2}+\operatorname{arcsinh}{(2a_1x_1)}\right). $$ Any ideas on how to do this more efficiently? Perhaps using the tangent lines of the parabola? More generally, for fixed arc lengths, I guess my question really is what the expressions of the corresponding red curves are for fixed arc lengths. Furthermore, could this be determined for any $f$? Edit: Interestingly enough, I found this clip from 3Blue1Brown. The origin point isn’t fixed as in my case, and I wonder how the animation was made (I couldn’t find the original video, only a clip, but here’s the link). For any Mathematica enthusiasts out there, a computational implementation of the straightening effect is also being discussed here, with some applications.
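    Since the arc length is strictly increasing in $x_1$, the closed form plus bisection solves for $x_1$ quickly without numerical integration; a Python sketch of the $g$ described above (function and parameter names are mine):

```python
import math

def arclen(a, x):
    """Arc length of y = a x^2 from 0 to x, via the closed form (a > 0)."""
    u = 2 * a * x
    return (u * math.sqrt(1 + u * u) + math.asinh(u)) / (4 * a)

def straighten(a0, a1, x0):
    """Return (x1, y1) on y = a1 x^2 with the same arc length from the origin."""
    L = arclen(a0, x0)
    lo, hi = 0.0, L          # arclen(a1, x) >= x, so the root is at most L
    for _ in range(80):      # bisection on the monotone arc-length function
        mid = (lo + hi) / 2
        if arclen(a1, mid) < L:
            lo = mid
        else:
            hi = mid
    x1 = (lo + hi) / 2
    return x1, a1 * x1 * x1

x1, y1 = straighten(a0=1.0, a1=0.25, x0=1.0)
assert abs(arclen(0.25, x1) - arclen(1.0, 1.0)) < 1e-10
```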

  • The expected distance between two points on a sphere and on a circle
    by Rene Morningstar on July 28, 2021 at 1:37 pm

    Two points are randomly selected on a circle. What is the expected distance between them? And what is the expected distance between two points on a sphere? An interesting problem. I had several ideas: we can generate a uniform distribution in an $n$-dimensional cube circumscribed around a unit ball, remove points outside the ball from the sample, and obtain a uniform distribution of vectors in the ball. Normalizing the vectors, we land on the sphere, and then we use Monte Carlo. Of course, there are a lot of iterations and the accuracy is low, but for a rough estimate and for checking the exact calculations it will do well. I also reasoned like this: since the task does not change under rotation, I can fix one point. We get the expectation of the distance from a random point of the circle to a fixed one. This is an obvious integral: $$ \frac{1}{2 \pi} \int_{-\pi}^{\pi} \sqrt{(1-\cos x)^{2}+\sin ^{2} x}\,dx=\frac{1}{\pi} \int_{-\pi}^{\pi}\left|\sin \frac{x}{2}\right|dx=\frac{2}{\pi} \int_{0}^{\pi} \sin \frac{x}{2}\,dx=\frac{4}{\pi}$$ for the unit circle, of course. $$ \begin{aligned} &\frac{1}{4 \pi} \int_{0}^{\pi} \int_{-\pi}^{\pi} \sin \theta \sqrt{(1-\cos \theta)^{2}+\sin ^{2} \theta \cos ^{2} \varphi+\sin ^{2} \theta \sin ^{2} \varphi}\, d \varphi d \theta= \\ &=\frac{1}{2} \int_{0}^{\pi} 2 \sin \frac{\theta}{2} \sin \theta\, d \theta=\frac{1}{2} \int_{0}^{\pi} \cos \frac{\theta}{2}-\cos \frac{3 \theta}{2} d \theta=\left.\left(\sin \frac{\theta}{2}-\frac{1}{3} \sin \frac{3 \theta}{2}\right)\right|_{0} ^{\pi}=\frac{4}{3} \end{aligned}.$$ And this is for the sphere. Parameterization of the sphere: $z=\cos(\theta)$, $x=\sin(\theta)\cos(\varphi)$, $y=\sin(\theta)\sin(\varphi)$. As a fixed point we take $(0,0,1)$. The Jacobian is $\sin(\theta)$. I took a slightly non-standard parameterization relative to $\theta$ so that later I would not mess with things like $\sin (\pi/4-\theta/2)$. Here are my thoughts. I ask you to double-check me and, if possible, write your own version.
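    Both closed forms are easy to confirm with the Monte Carlo idea described above (a Python sketch, fixing one point by rotational symmetry; for the sphere, a uniform $z\in[-1,1]$ gives a uniform surface point, and the distance from the north pole is $\sqrt{2-2z}$):

```python
import math, random

random.seed(0)
N = 200_000

# circle: fix (1, 0), average distance to a uniformly random angle
circle = sum(
    math.hypot(1 - math.cos(t), math.sin(t))
    for t in (random.uniform(0, 2 * math.pi) for _ in range(N))
) / N

# sphere: fix (0, 0, 1); distance to a surface point at height z is sqrt(2 - 2z)
sphere = sum(math.sqrt(2 - 2 * random.uniform(-1, 1)) for _ in range(N)) / N

assert abs(circle - 4 / math.pi) < 0.01   # 4/pi ≈ 1.2732
assert abs(sphere - 4 / 3) < 0.01
```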

  • Are all sine functions odd?
    by mn12 on July 28, 2021 at 7:11 am

    If I have a function like $f(x) = \sin(e^{x/2} + e^{-x/2})$, or something equally complicated, do I actually need to work out whether $f(-x) = -f(x)$, or are all sine functions odd no matter what the argument is, so that it is just a matter of proving it using trig identities? Thanks!
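    For what it’s worth, this particular example is even rather than odd, since the inner function $e^{x/2}+e^{-x/2}=2\cosh(x/2)$ is even, so $f(-x)=f(x)$. A quick numeric check (Python sketch, sample points chosen arbitrarily):

```python
import math

def f(x):
    # the inner function 2*cosh(x/2) is even, so f(-x) = f(x): f is even, not odd
    return math.sin(math.exp(x / 2) + math.exp(-x / 2))

for x in (0.3, 1.0, 2.5):
    assert abs(f(-x) - f(x)) < 1e-12   # even
    assert abs(f(-x) + f(x)) > 1e-3    # not odd (f(x) != 0 at these points)
```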

  • On the ‘wrong proof’ of the chain rule
    by Mathematician 42 on July 27, 2021 at 10:21 am

    I am looking through an old analysis course that I had and I was pondering a bit about the proof of the chain rule (especially the notorious wrong proof that you can give). I’d be happy if someone was willing to verify my reasoning below. I end with an actual question. Let’s start with the following nice result. Let $f\colon \mathbb{R}\to \mathbb{R}$ be a continuous function which is differentiable on $\mathbb{R}_0$. Assume that $\lim_{x\to 0}f'(x)=L\in \mathbb{R}$. Then $f$ is differentiable in $0$. Proof: For each $h\neq 0$, the mean value theorem yields a $c_h\in \mathbb{R}$ strictly between $h$ and $0$ such that $f'(c_h)=\frac{f(h)-f(0)}{h}$. Letting $h\to 0$, it is obvious that $c_h\to 0$, as each $|c_h|<|h|$. Hence $$\lim_{h\to 0}\frac{f(h)-f(0)}{h}=\lim_{h\to 0}f'(c_h)=L.$$ $\square$ Great, let’s apply this to the following function: $$\phi\colon \mathbb{R}\to \mathbb{R}:x\mapsto \begin{cases}x^3\sin(\frac{1}{x}) & \mbox{ if }x\neq 0,\\0 & \mbox{ if } x=0.\end{cases}$$ Clearly $\phi$ is differentiable on $\mathbb{R}_0$ and $$\phi'(x)=3x^2\sin(\frac{1}{x})-x^3\cos(\frac{1}{x})\frac{1}{x^2}=3x^2\sin(\frac{1}{x})-x\cos(\frac{1}{x})$$ for all $x\neq 0$. It is straightforward to see that $\lim_{x\to 0}\phi'(x)=0$ and thus the above result yields that $\phi'(0)=0$ (in particular $\phi$ is differentiable on the whole of $\mathbb{R}$). Now at this point, recall the chain rule. Let $f,g\colon \mathbb{R}\to \mathbb{R}$ be functions. If $a\in\mathbb{R}$ such that $f'(a)$ and $g'(f(a))$ both exist, then $(g\circ f)'(a)=g'(f(a))f'(a)$. The obvious argument to try is the following ‘wrong proof’: \begin{eqnarray} \lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{x-a} &=& \lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{f(x)-f(a)}\cdot \frac{f(x)-f(a)}{x-a}\\ &=& \lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{f(x)-f(a)}\cdot \lim_{x\to a}\frac{f(x)-f(a)}{x-a}\\ &=& g'(f(a))f'(a). \end{eqnarray} Here we used that $f$ is continuous in $a$ to see that $f(x)\to f(a)$ as $x\to a$. 
$\triangle$ However, there is an obvious error in the above reasoning. If for example $f$ is a constant function $f(x)=f(a)$ for all $x\in \mathbb{R}$, then $\lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{f(x)-f(a)}=\lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{0}$ is nonsensical! Having said that, it is also clear that the above proof does work for functions such that $\exists \delta>0:\forall x\in (a-\delta,a+\delta)\setminus \{a\}:f(x)\neq f(a)$. In that case, $f(x)$ does not equal $f(a)$ for $x$ near $a$ (and $x\neq a$). So the above proof only fails for a particular type of function, the easiest of which are constant functions. However, for a constant function $f$, one can calculate $(g\circ f)'(a)$ directly and show that it’s $0$. A natural question at this point is to wonder whether there exists a nonconstant function $f$ such that $f$ is differentiable in $a$ and $f(x)=f(a)$ infinitely often for $x$ near $a$. The answer is yes and the function $\phi$ given in the example above (with $a=0$) satisfies these properties. (Also, the wikipedia page of the chain rule gives the function $f(x)=x^2\sin(\frac{1}{x})$ for $x\neq 0$ and $f(0)=0$ as an example, but this function is not differentiable in $0$. As far as I can tell, this is a worse example than just a constant function to pinpoint the failure of the ‘wrong proof’. Perhaps this should be changed?) In general let $f$ be such a function (thus $\forall \delta>0:\exists x\neq a: |x-a|<\delta$ and $f(x)=f(a)$). If $\lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{x-a}$ exists, then we can compute this limit by choosing an appropriate sequence $x_n\to a$. For each $n\geq 1$, there exists an $x_n\neq a$ such that $|x_n-a|<\frac{1}{n}$ and $f(x_n)=f(a)$. It follows that \begin{eqnarray} \lim_{x\to a}\frac{g\circ f(x)-g\circ f(a)}{x-a}&=&\lim_{n\to \infty}\frac{g\circ f(x_n)-g\circ f(a)}{x_n-a}\\ &=& \lim_{n\to \infty}\frac{g\circ f(a)-g\circ f(a)}{x_n-a}\\ &=& 0. \end{eqnarray} This shows that if $f$ is a function for which the ‘wrong proof’ of the chain rule fails, then $(g\circ f)'(a)=0$. Of course, I was only able to show this under the assumption that $(g\circ f)'(a)$ actually exists (which of course is true, as one can actually prove the chain rule). Nonetheless, this begs the question whether there is a more direct way of showing that $(g\circ f)'(a)$ actually exists (and equals zero) if $f$ is a function for which the ‘wrong proof’ fails. If so, one can actually fix this ‘wrong proof’ by considering two cases.
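    The two properties of $\phi$ used above, namely $\phi'(0)=0$ and $\phi(x)=\phi(0)$ infinitely often near $0$, can be checked numerically (a Python sketch with sample points of my choosing):

```python
import math

def phi(x):
    return x**3 * math.sin(1 / x) if x != 0 else 0.0

# difference quotients at 0: |phi(h)/h| = h^2 |sin(1/h)| <= h^2 -> 0
for h in (1e-2, 1e-4, 1e-6):
    assert abs(phi(h) / h) <= h * h

# phi vanishes at x = 1/(k*pi), a sequence of points tending to 0
for k in (10, 100, 1000):
    assert abs(phi(1 / (k * math.pi))) < 1e-15
```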

  • Prove $a^2 + b^2 + c^2 + ab + bc +ca \ge 6$ given $a+b+c = 3$ for $a,b,c$ non-negative real.
    by sku on July 27, 2021 at 2:25 am

    I want to solve this problem using only the AM-GM inequality. Can someone give me the softest possible hint? Thanks. Useless fact: from the equality we can conclude $abc \le 1$. Attempt 1: Adding $(ab + bc + ca)$ to both sides of the inequality and using the equality leaves me to prove: $ab + bc + ca \le 3$. Final edit: I found an easy way to prove the above. $18 = 2(a+b+c)^2 = (a^2 + b^2) + (b^2 + c^2) + (c^2 + a^2) + 4ab + 4bc + 4ca \ge 6(ab + bc + ca) \implies ab + bc + ca \le 3$ (please let me know if there is a mistake in the above). Attempt 2: multiplying both sides of the inequality by $2$, we get: $(a+b)^2 + (b+c)^2 + (c+a)^2 \ge 12$. By substituting $x = a+b, y = b+c, z = c+a$ and using $x+y+z = 6$ we will need to show: $x^2 + y^2 + z^2 \ge 12$. This doesn’t seem trivial either based on AM-GM. Edit: This becomes trivial by C-S. $(a+b)\cdot 1 + (b+c)\cdot 1 + (c+a)\cdot 1 = 6 \Rightarrow \sqrt{((a+b)^2 + (b+c)^2 + (c+a)^2)(1 + 1 + 1)} \ge 6 \implies (a+b)^2 + (b+c)^2 + (c+a)^2 \ge 12$. Attempt 3: $x = 1-t-u$, $y = 1+t-u$, $z = 1 + 2u$: $(1-u-t)^2 + (1-u+t)^2 + (1+2u)^2 + (1-u-t)(1-u+t) + (1+t-u)(1+2u) + (1-t-u)(1+2u)$ $ = 2(1-u)^2 + 2t^2 + (1 + 2u)^2 + (1-u)^2 - t^2 + 2(1+2u)(1-u)$; expanding we get: $ = 3(1 + u^2 -2u) + t^2 + 1 + 4u^2 + 4u + 2 + 2u - 4u^2 = 6 + 3u^2 + t^2 \ge 6$. Yes, this works (not using AM-GM or any such thing).
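    A randomized check over the simplex $a+b+c=3$, $a,b,c\ge0$ (a Python sketch; the sample size is arbitrary) is consistent with the minimum $6$ attained at $a=b=c=1$:

```python
import random

random.seed(1)

def lhs(a, b, c):
    return a*a + b*b + c*c + a*b + b*c + c*a

minimum = float("inf")
for _ in range(100_000):
    u, v = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = u, v - u, 3 - v      # two sorted cuts give a uniform simplex point
    minimum = min(minimum, lhs(a, b, c))

assert minimum >= 6 - 1e-9         # the inequality holds on all samples
assert minimum < 6.01              # and the bound 6 is nearly attained
```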

  • Why do we always need the Schwarz lemma when bounding the trace of a Kähler metric?
    by Geometer in the making on July 26, 2021 at 11:35 pm

    My undergraduate thesis topic is Kähler geometry. The general direction is something like the Calabi-Yau theorem or more adventurously some singular Calabi-Yau theorem, but this is not certain yet. One thing that I am noticing a lot of in my reading of Kähler geometry is that if we have two Kähler metrics $\omega$, $\eta$, then to get a bound of the form $$\text{tr}_{\omega}(\eta) \leq C$$ we need to use the Schwarz lemma: essentially, we apply the maximum principle to some term like $$\log \text{tr}_{\omega}(\eta) - A \varphi,$$ where $\omega = \eta + dd^c \varphi$ and $A>0$ is large. This requires an assumption on the (Ricci/bisectional/holomorphic sectional) curvatures of $\omega$, $\eta$ (depending on which Laplacian one computes with). I feel that I understand how to use the Schwarz lemma to get these estimates, but I want to ask why we have to use it (if we have to?). This is prompted by studying singular metrics, for example cone and cusp metrics: To formulate my question, let $D$ be a divisor in a compact Kähler manifold $M$, and for simplicity, assume that $D$ has simple normal crossings. A cone Kähler metric is a Kähler metric which is smooth on $M - D$ and is quasi-isometric to $$\frac{i}{2} \sum_{j=1}^k | z_j |^{2(1-\beta_j)} dz_j \wedge d\overline{z}_j + \frac{i}{2} \sum_{j \geq k+1} dz_j \wedge d\overline{z}_j.$$ A cusp Kähler metric is a smooth Kähler metric on $M-D$ which is quasi-isometric to $$\frac{i}{2} \sum_{j=1}^k | z_j |^{-2}| \log | z_j |^2 |^2 dz_j \wedge d\overline{z}_j + \frac{i}{2} \sum_{j \geq k+1} dz_j \wedge d\overline{z}_j.$$ From these descriptions, can one not see immediately that if $\omega$ is cusp and $\eta$ is cone, then $$\text{tr}_{\omega}(\eta) \leq C | z_i|^2 | \log | z_i |^2|^2,$$ which would give $$\text{tr}_{\omega}(\eta) \leq C \prod_j | \sigma_j |^2 | \log | \sigma_j |^2 |^2,$$ if $\sigma_j$ are the defining sections for the divisor $D$? 
What initially came to my mind is a coordinate dependence problem, but this seems to contradict the fact that many calculations of this type involve normal coordinate calculations. Sorry if this question is silly.

  • What is the correct way to think about quotient sets and equivalence relations?
    by user324789 on July 26, 2021 at 5:51 pm

    Perhaps there is not a correct way to think about it, but I would like to know how others think about it. Here are my problems/questions, after my definitions: Definition 1. Let $X$ be a set and $\sim$ be an equivalence relation on $X$. Then $[x]:=\{y \in X \mid y \sim x\}$ and $X/{\sim} := \{[x] \mid x \in X\}$. My question could be summarized as “How should I think about $X/{\sim}$?”. Consider $\mathbf{Z}/{\sim}$ with $z_1 \sim z_2$ $\iff$ $z_1-z_2$ is even. One then obtains $\mathbf{Z}/{\sim} = \{[0],[1]\}=\{\{…,-4,-2,0,2,4,…\},\{…,-5,-3,-1,1,3,5,…\}\}.$ The way I think about the set of all equivalence classes is that one collects all equivalent elements into one set, for every element, and obtains the set on the very right in the example. Then one picks a “name” for each of those sets, calling it by one of its members. In the example one has the canonical choices of $[0],[1]$. If I now pick an arbitrary element $a \in \mathbf{Z}/{\sim}$, then there exists a $z \in \mathbf{Z}$ such that $a=[z]$. This is because I can simply call the set $a$ by one of its representatives, in this case $z$, or in the example above $[0]$ or $[1]$. When defining a function it then suffices to define it on all the “names” $[z]$, because I can give each object in $\mathbf{Z}/{\sim}$ one. The function being well defined then comes down to showing that it is independent of the name each object has been given. Is this a valid way to think about this concept, or are there other, perhaps better ways to do so? I am not sure if I am satisfied with the way I would explain it to myself, since the “giving it a name” does not really sound that rigorous. I guess one could also view this as a sort of assignment which assigns to every set of equivalent elements a member of it (which is not well defined) and then assigns to it a value such that this process is well defined. Edit: The following is still not entirely clear to me.
When defining a function from a quotient set to another set, one usually defines it in the following way: $$f: X/{\sim} \to A, \ [x] \mapsto a(x).$$ How should I think about this? Do I first choose an (arbitrary) complete system of representatives, define the function for them and then show that it does not depend on the choice of the complete system, or do I map all $[x]$, $x \in X$, and then check that the images of equivalent elements are the same, meaning that the function is well defined?
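The "define on names, then check independence" procedure described above can be made concrete in a few lines; here is a sketch for the mod-2 example (the helper names `cls` and `well_defined` are purely illustrative, and classes are modeled by a canonical representative rather than as infinite sets):

```python
# Sketch of "well-definedness" for Z/~ with z1 ~ z2 iff z1 - z2 is even.
# Each class [z] is modeled by a canonical representative, here z % 2.

def cls(z):
    """Return the canonical name of the equivalence class [z]."""
    return z % 2

def well_defined(f, sample=range(-50, 50)):
    """Check, on a finite sample of Z, that f is constant on each
    equivalence class, i.e. that z1 ~ z2 implies f(z1) == f(z2)."""
    values = {}
    for z in sample:
        values.setdefault(cls(z), set()).add(f(z))
    return all(len(v) == 1 for v in values.values())

# f([z]) := z^2 mod 2 is well defined: equivalent z's give equal values.
print(well_defined(lambda z: z * z % 2))   # True
# g([z]) := z mod 4 is NOT well defined: 0 ~ 2 but they map to 0 and 2.
print(well_defined(lambda z: z % 4))       # False
```

The second check fails for exactly the reason discussed above: the value depends on which "name" of the class was used.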

  • How many words of length $k$ are there such that no symbol in the alphabet $\Sigma$ occurs exactly once?
    by Sten on July 26, 2021 at 12:12 pm

    Introduction: Given an alphabet $\Sigma$ of size $s$, I want to find a way of counting words $w$ of length $k$ which obey the rule: no symbol occurs exactly once in $w$. We’ll call this number $Q^s_k$. I am particularly interested in closed-form expressions for $Q_k^s$, or at least expressions that are fairly easy to calculate when the number of symbols is moderately large (say $s \sim 50$). I’m not particularly up to speed in this area of maths, but I’ve tried a couple of different things. I’ll list them below, and end with my questions on how to move on. Deterministic finite automata: The language I’ve described above is regular, so it’s possible to construct a deterministic finite automaton describing it. Here’s what that looks like for an alphabet of two symbols. The blue and green arrows correspond to inputs of the two different types of symbols. The accepting states of the DFA are 02, 20 and 22. The number of accepted words of length $k$ is then the number of paths of length $k$ from the initial state to an accepting state. From this cs stackexchange question I’ve found that once you have the transition matrix of the DFA, the problem boils down to calculating powers of the transition matrix, and then looking at particular rows and columns. Unfortunately, there are $3^s$ states in the DFA (for each symbol, we can have encountered it 0, 1 or more than 1 times), and $s\times 3^s$ nonzero transitions between them. Combinatorial approach: My second approach to this problem was to try and find a recursive expression for $Q^s_k$. If we restrict the alphabet to one symbol, i.e. $s = 1$, then if $k \neq 1$, there’s exactly one valid word, otherwise there are none. Note that by definition the empty word is valid. If we extend the alphabet to two symbols, we can write the new expression in terms of $Q^1$, giving $Q^2_k = \sum_{m=0; m\neq1}^k Q^1_{k-m} C(k, m)$.
The idea is that we can form valid words with two symbols by taking $m$ instances of the new symbol and inserting them into the valid words of length $k - m$ with one symbol. We don’t use $m = 1$ in the sum since that would not give a valid word, and the case $k - m = 1$ is taken care of by the fact that $Q^1_1 = 0$. The combinatorial factor accounts for the fact that we have to select $m$ slots in the final string for the occurrences of the new symbol, and all permutations of those slots are equivalent. In fact, this approach generalises neatly to the recursive expression \begin{equation} \tag{*}\label{eq:combinatorics} Q^{s+1}_k = \sum_{m=0; m\neq1}^k Q^{s}_{k-m} C(k, m), \end{equation} since the logic for what happens when we add a new symbol to an existing alphabet is exactly the same. Unfortunately, that’s about where I got stuck. Inclusion-exclusion: As an alternative to trying to count all the valid words, we could take the perspective that the number of valid words is the total number of words minus the number of invalid words. For example, the number of valid words of length $k$ with an alphabet of two symbols is $Q^2_k = 2^k - 2k + 2\delta_{k,2}$.
However, this double counts all the cases where two symbols occur only once. There are $C(s, 2)$ pairs of symbols in the alphabet, and there are $P(k, 2)$ possible permutations of these symbols in a word of length $k$. The remaining $(k - 2)$ symbols are chosen from an alphabet of size $s - 2$, giving $(s-2)^{k-2}C(s, 2) P(k, 2)$ words of this form. Continuing, we arrive at the general expression \begin{equation} \tag{**}\label{eq:inclusion} Q^{s}_k = \sum_{i=0}^{\mathrm{min}(s, k)} (-1)^{i}(s - i)^{k-i} C(s, i) P(k, i) \end{equation} Note that for this expression to be true, we must consider $0^0 = 1$. Again, I was unable to simplify this much further. Questions: Constructing the giant DFA feels like an unsatisfactory approach, because it completely ignores the symmetries in the problem. If the DFA is in the initial state “$000\ldots$”, then any symbol it encounters is in some sense equivalent. Is there some clever way of using the symmetries of the problem to reduce the size of the DFA? Can either (or both) of the expressions $\eqref{eq:combinatorics}$ and $\eqref{eq:inclusion}$ be simplified further? Is there a nice algebraic argument for why $\eqref{eq:combinatorics}$ and $\eqref{eq:inclusion}$ are equal? I know they must be, since they count the same thing, but I don’t see any simple way of showing it.
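Both candidate formulas are easy to cross-check by machine against brute-force enumeration; here is a sketch (the function names are illustrative) implementing the recursion (*), the inclusion-exclusion sum (**), and a direct count:

```python
from itertools import product
from math import comb, perm

def Q_recursive(s, k):
    """Recursion (*): extend an (s-1)-symbol alphabet by one symbol,
    inserting m != 1 copies of it into shorter valid words."""
    if s == 1:
        return 0 if k == 1 else 1  # the empty word is valid; a length-1 word is not
    return sum(Q_recursive(s - 1, k - m) * comb(k, m)
               for m in range(k + 1) if m != 1)

def Q_ie(s, k):
    """Inclusion-exclusion (**); Python's 0 ** 0 == 1 matches the convention."""
    return sum((-1) ** i * (s - i) ** (k - i) * comb(s, i) * perm(k, i)
               for i in range(min(s, k) + 1))

def Q_brute(s, k):
    """Direct count: words over range(s) in which no symbol occurs exactly once."""
    return sum(1 for w in product(range(s), repeat=k)
               if all(w.count(c) != 1 for c in range(s)))

for s in range(1, 4):
    for k in range(8):
        assert Q_recursive(s, k) == Q_ie(s, k) == Q_brute(s, k)
print(Q_ie(2, 4))  # 8: the words 0000, 1111 and the six words with counts (2, 2)
```

This only confirms the two expressions agree on small cases, of course; it says nothing about a closed form.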

  • Vanishing differential forms in cohomology
    by diracula on July 26, 2021 at 9:33 am

    Let $X$ be a smooth differentiable manifold. Consider on $X$ a closed $p$-form $\eta$ and a closed $q$-form $\omega$, which have associated cohomology classes $[\eta] \in H^p(X)$ and $[\omega] \in H^q(X)$. Now assume their wedge product is zero in cohomology: $[ \eta \wedge \omega ] = 0 \in H^{p+q}(X)$. My question is: Is it always possible to find cohomologically equivalent elements $\eta' \in [\eta]$ and $\omega' \in [\omega]$ such that $\eta' \wedge \omega' = 0$ (i.e. such that the wedge product is genuinely zero, not only in cohomology)? Naively, one needs to determine whether the exact form $\mathrm{d}\xi$ making $\eta \wedge \omega + \mathrm{d}\xi = 0$ can always be written in the form $(\eta + \mathrm{d}\alpha) \wedge (\omega + \mathrm{d}\beta) - \eta \wedge \omega$ for some $\alpha$ and $\beta$. But this seems a difficult question, so I am wondering if there is a better argument.

  • Prove: $\int_{0}^{\infty} x^9K_0(x)^4\,\text{d}x =\frac{42777\zeta(3)-51110}{2048}$ [closed]
    by Sakup2485 on July 26, 2021 at 7:35 am

    Wolfram Alpha says: $$ \int_{0}^{\infty} xK_0(x)^4\,\text{d}x =\frac{7\zeta(3)}{8}, $$ where $$K_0(x) =\int_{0}^{\infty} e^{-x\cosh z}\,\text{d}z, $$ and I proved it by using the Mellin transform. But I also found (conjectured): $$ \begin{aligned} &\int_{0}^{\infty}x^3K_0(x)^4\text{d}x =\frac{7\zeta(3)-6}{32} \\ &\int_{0}^{\infty}x^5K_0(x)^4\text{d}x =\frac{49\zeta(3)-54}{128}\\ &\int_{0}^{\infty}x^7K_0(x)^4\text{d}x =\frac{1008\zeta(3)-1184}{512}\\ &\int_{0}^{\infty} x^9K_0(x)^4\text{d}x =\frac{42777\zeta(3)-51110}{2048} \end{aligned} $$ How can one prove them? The Mellin transform doesn’t work on $x^3, x^5, x^7, \ldots$ Any help would be appreciated. Please see: Hypergeometric Forms for Ising-Class Integrals.
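The first two moments are easy to check numerically using only the integral representation of $K_0$ quoted above; a stdlib-only sketch (the quadrature parameters and the name `moment` are illustrative, and the accuracy claims are only heuristic):

```python
import math

def K0(x, n=1000):
    """Modified Bessel K_0 via the representation in the question,
    K_0(x) = int_0^inf exp(-x cosh z) dz, by the composite trapezoid rule.
    The integrand is truncated once x*cosh(z) >= 50 (tail below e^-50)."""
    zmax = math.acosh(max(50.0 / x, 2.0))
    h = zmax / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(zmax)))
    total += sum(math.exp(-x * math.cosh(i * h)) for i in range(1, n))
    return h * total

def moment(p, n=400):
    """int_0^inf x^p K_0(x)^4 dx by Simpson's rule in t = log(x) over
    [log 1e-4, log 30]; the contribution outside this window is negligible."""
    a, b = math.log(1e-4), math.log(30.0)
    h = (b - a) / n
    def f(t):
        x = math.exp(t)
        return x ** (p + 1) * K0(x) ** 4  # extra factor x from dx = x dt
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

zeta3 = 1.2020569031595943  # Apery's constant, zeta(3)
m1, m3 = moment(1), moment(3)
print(m1, 7 * zeta3 / 8)           # both approx 1.0518
print(m3, (7 * zeta3 - 6) / 32)    # both approx 0.07545
```

This supports the conjectured values to a few decimal places, but a numerical check is of course not the requested proof.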

  • Why does “Turn! Turn! Turn!” equal 241217.524881?
    by tparker on July 26, 2021 at 12:22 am

    If you search for “Turn! Turn! Turn!” on Google, then the second result is this YouTube video of The Byrds performing the Pete Seeger song of that name. But the first result is Google’s internal calculator displaying “241217.524881”. With a bit of experimentation, it appears that this number is a numerical approximation to $$\frac{\Gamma(2\pi+1)^2}{2 \pi},$$ where $\Gamma$ represents the Euler gamma function. I sort of understand why Google is interpreting “Turn” to mean $2\pi$, and the exclamation mark to mean $x! := \Gamma(x+1)$, as this is a relatively common (although not universal) choice of interpolation of the factorial function to the real numbers. But in that case, I would expect Google to interpret “Turn! Turn! Turn!” to represent $\Gamma(2\pi+1)^3 \approx 1\, 865\, 877\, 433$ instead of the expression above. Why isn’t it? A possible partial solution: if you search “Turn! Turn” then you get the expected result $7735.248 \approx \Gamma(2\pi+1) 2\pi$. But if you search “Turn! Turn!” then you do not get the expected result $\Gamma(2\pi+1)^2 \approx 1\, 515\, 614$. Instead, you get 195.936, which appears to be the numerical approximation of $\Gamma(2\pi+1)/(2\pi)$. Moreover, Google reparses the input as “Turn ! (Turn !)”. To me, this suggests that it’s interpreting the second exclamation mark as a factorial symbol, but the first exclamation mark to mean $a ! b := b/a$, i.e. division but with the usual order of arguments reversed. This explains the original result if Google is interpreting “Turn! Turn! Turn!” with the first exclamation mark representing reversed division (with a lower order-of-operations precedence than multiplication) but the second two exclamation marks representing factorial: $$2\pi “!” (((2\pi)!)\ ((2\pi)!)) = \frac{\Gamma(2\pi+1)^2}{2\pi}.$$ Is this notation $a!b := b/a$ standard? I’ve never seen it before. Can anyone explain how Google is parsing this string?
(This is one of those awkward questions where the (unknown) solution determines whether or not the question is on-topic for Math Stack Exchange. If the solution does indeed come down to unusual math notation, as I suspect, then the question is on-topic for Math SE. But if the resolution is just some black-box machine learning magic, then maybe the question isn’t on topic. I’m not quite sure what one does in this kind of situation.)
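The proposed parse is easy to check against the displayed values using Python's `math.gamma` (so `fact` below is $\Gamma(2\pi+1)$, the "Turn!" of the question):

```python
import math

turn = 2 * math.pi
fact = math.gamma(turn + 1)   # "Turn!" as the interpolated factorial

print(fact * turn)            # "Turn! Turn": approx 7735.248
print(fact / turn)            # "Turn! Turn!": approx 195.936 (reversed division?)
print(fact ** 2 / turn)       # "Turn! Turn! Turn!": approx 241217.524881
print(fact ** 3)              # the naive all-factorial parse: approx 1.866e9
```

All three displayed Google values match the reversed-division theory to the shown precision; the code of course says nothing about why Google parses it that way.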

  • Proving $\sum_{i=1}^n\sum_{j=1}^n\sqrt{|x_i-x_j|}\le \sum_{i=1}^n\sum_{j=1}^n\sqrt{|x_i+x_j|}$.
    by hamam_Abdallah on July 25, 2021 at 10:44 pm

    IMO 2021, Problem 2. Let $n$ be an integer $\ge 2$ and $x_1, x_2, \ldots, x_n$ be $n$ reals. Prove that $$\sum_{i=1}^n\sum_{j=1}^n\sqrt{|x_i-x_j|}\le \sum_{i=1}^n\sum_{j=1}^n\sqrt{|x_i+x_j|}.$$ I wrote the left sum as $$2\sum_{i=2}^n\sum_{j=1}^{i-1}\sqrt{|x_i-x_j|}$$ and the right one as $$2\sum_{i=2}^n\sum_{j=1}^{i-1}\sqrt{|x_i+x_j|}+\sum_{i=1}^n\sqrt{2|x_i|}$$ but it seems this is not a good starting point.
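Before hunting for a proof it can be reassuring to confirm the inequality on random data; a quick randomized check (evidence only, not a proof):

```python
import math
import random

random.seed(0)

def lhs_rhs(xs):
    """Evaluate both double sums of the IMO inequality."""
    left = sum(math.sqrt(abs(a - b)) for a in xs for b in xs)
    right = sum(math.sqrt(abs(a + b)) for a in xs for b in xs)
    return left, right

for _ in range(200):
    n = random.randint(2, 8)
    xs = [random.uniform(-10, 10) for _ in range(n)]
    left, right = lhs_rhs(xs)
    assert left <= right + 1e-9  # small slack for floating-point rounding
```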

  • How to rewrite $\int\limits_A^B \frac{x^n \exp(-\alpha x)}{(x + \beta)^m} \, dx$?
    by Thai-Hoc on July 25, 2021 at 5:58 pm

    Currently, I am a post-graduate researcher in Telecommunications. In the process of evaluating the transmission error probability, I need to evaluate the following integral: $I = \int\limits_A^B \frac{x^n \exp(-\alpha x)}{(x + \beta)^m} \, dx$. How can this improper integral be rewritten in terms of special functions (for example $\mathrm{Ei}(x)$, Bessel, …)? Notice that $A, B, \alpha, \beta > 0$ (positive real numbers) and $m, n$ are two positive integers. I have tried to compute this integral for different values of $A, B, \alpha, \beta > 0$ and $m, n$ by using Wolfram Mathematica. It seems that the results have the form of the exponential integral function $\mathrm{Ei}(x) = -\int_{-x}^{\infty} \frac{e^{-t}}{t}\,dt = \int_{-\infty}^{x} \frac{e^{t}}{t}\,dt$ (as a principal value), namely $I = C_1\bigg[C_2 + C_3\big[ {\rm Ei}\big(- \alpha(\beta+ A)\big) - {\rm Ei}\big(- \alpha(\beta+ B)\big) \big] \bigg]$. Is there any way to find the correct values of $C_1$, $C_2$, and $C_3$? Thank you for your enthusiasm!
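One standard reduction (a sketch, not a verified closed form) substitutes $u = x + \beta$ and expands $x^n = ((x+\beta)-\beta)^n$ binomially:

```latex
I = e^{\alpha\beta} \sum_{j=0}^{n} \binom{n}{j} (-\beta)^{n-j}
    \int_{A+\beta}^{B+\beta} u^{\,j-m}\, e^{-\alpha u} \, du .
```

Each term with $j - m \geq 0$ is elementary (repeated integration by parts), the terms with $j - m \leq -2$ reduce by integration by parts to the $j - m = -1$ term, and $\int u^{-1} e^{-\alpha u}\, du = \mathrm{Ei}(-\alpha u) + \text{const}$, which is exactly where the observed difference $\mathrm{Ei}(-\alpha(\beta+A)) - \mathrm{Ei}(-\alpha(\beta+B))$ comes from (up to a sign absorbed into $C_3$). Collecting the elementary pieces into $C_2$ and the $\mathrm{Ei}$ coefficient into $C_3$, with $C_1$ a common prefactor, would pin down the constants, though the bookkeeping is tedious.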

  • Markov chains’ origins and how Christianity is involved
    by John Cataldo on July 25, 2021 at 4:52 pm

    In a book called Advanced Data Analysis from an Elementary Point of View by Cosma Rohilla Shalizi, page 405, the first instance of “Markov process” is accompanied by a footnote which reads: “After the Russian mathematician A. A. Markov, who introduced the theory of Markov processes in the course of a mathematical dispute with his arch-nemesis, to show that probability and statistics could apply to dependent events, and hence that Christianity was not necessarily true (I am not making this up: Basharin et al., 2004).” I found it curious: how could religion have anything to do with the fact that the law of large numbers can be extended to non-i.i.d. variables (because that is what Nekrasov, Markov’s “arch-nemesis”, was wrong about, and that argument is at the origin of the chains; they are a counterexample to Nekrasov’s false claim that independence is necessary for a law of large numbers)? But I did not find the answer in the referenced math/history paper by Basharin et al. Why would the following be true: [law of large numbers holds $\implies$ independence] $\implies$ Christianity is true? And what do they mean by Christianity being true or not?

  • I don’t know how to exactly compute this determinant
    by KPRS on July 25, 2021 at 3:26 pm

    I’ve tried to compute this determinant by row and column transformations, but it gives me a formula that doesn’t work. The determinant is: \begin{vmatrix} x & a & b & c & d\\ a & x & b & c & d\\ a & b & x & c & d\\ a & b & c & x & d\\ a & b & c & d & x \end{vmatrix} I thought I could start doing row 5 - row 4, row 4 - row 3, row 3 - row 2 and row 2 - row 1, and then you get this determinant: \begin{vmatrix} x & a & b & c & d\\ a-x & x-a & 0 & 0 & 0\\ 0 & b-x & x-b & 0 & 0\\ 0 & 0 & c-x & x-c & 0\\ 0 & 0 & 0 & d-x & x-d \end{vmatrix} Then I did column 1 - column 2, column 2 - column 3, column 3 - column 4, column 4 - column 5 and you get: \begin{vmatrix} x-a & a-b & b-c & c-d & d\\ 0 & x-a & 0 & 0 & 0\\ 0 & 0 & x-b & 0 & 0\\ 0 & 0 & 0 & x-c & 0\\ 0 & 0 & 0 & 0 & x-d \end{vmatrix} And, as it is triangular, you can multiply the diagonal elements, so you get that the determinant is: $(x-a)^2(x-b)(x-c)(x-d)$ But this isn’t correct and I don’t know what to do, could someone please help me? I’d really appreciate it.
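A quick exact computation (Python integers, cofactor expansion) confirms that the derived formula cannot be right. For comparison, the factored form $(x-a)(x-b)(x-c)(x-d)(x+a+b+c+d)$ (which one can guess by noting that each of $x = a, b, c, d$ makes two adjacent rows equal, and that the $x^4$ coefficient of the monic quintic must vanish) does match at sample values:

```python
def det(m):
    """Exact determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def M(a, b, c, d, x):
    """The matrix from the question."""
    return [[x, a, b, c, d],
            [a, x, b, c, d],
            [a, b, x, c, d],
            [a, b, c, x, d],
            [a, b, c, d, x]]

a, b, c, d, x = 1, 2, 3, 4, 5
print(det(M(a, b, c, d, x)))                                        # 360
print((x - a) ** 2 * (x - b) * (x - c) * (x - d))                   # 96: formula fails
print((x - a) * (x - b) * (x - c) * (x - d) * (x + a + b + c + d))  # 360
```

(The slip in the working above is that the column operations were applied to the already-modified rows, which changes the lower-triangular structure.)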

  • Solve: $2^{\cos^{2014}x} – 2^{\sin^{2014} x} = \cos^{2013} (2x)$
    by andu eu on July 25, 2021 at 2:14 pm

    I have encountered this in a Romanian mathematical magazine at the 10th grade section (so using more advanced things like calculus shouldn’t be necessary). Solve: $$2^{\cos^{2014} x} - 2^{\sin^{2014} x} = \cos^{2013}(2x)$$ My first approach was to solve it on the interval $[-2\pi, 2\pi)$ and I tried dividing it into different subintervals to try to work with increasing/decreasing functions. This however fails, as there are some intervals on which both sides have the same property. Then I tried working with inequalities, particularly the AM-GM inequality, but I could not solve it either. Have you got any clues?
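A float sanity check (not a proof) that the natural candidate family $x = \frac{\pi}{4} + \frac{k\pi}{2}$, where $\cos 2x = 0$ and $\cos^{2014}x = \sin^{2014}x$, does satisfy the equation:

```python
import math

def lhs(x):
    return 2 ** (math.cos(x) ** 2014) - 2 ** (math.sin(x) ** 2014)

def rhs(x):
    return math.cos(2 * x) ** 2013

for k in range(-4, 5):
    x = math.pi / 4 + k * math.pi / 2
    # Here cos^2 x = sin^2 x = 1/2, so both powers agree and the LHS vanishes;
    # cos(2x) = 0, so the RHS vanishes too (up to floating-point rounding).
    assert abs(lhs(x) - rhs(x)) < 1e-12
```

Whether these are the only solutions is exactly the interesting part of the problem, and a numerical scan cannot settle that.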

  • Is this matrix positive semidefinite? $M_{ij} = \sqrt{|x_i+x_j|} - \sqrt{|x_i-x_j|}$ where $x_i$’s are reals
    by Josh Bolton on July 25, 2021 at 2:46 am

    Let $x_1, \ldots, x_n$ be finitely many reals. Define the matrix $M_{ij} = \sqrt{|x_i+x_j|} - \sqrt{|x_i-x_j|}$. Is this matrix positive semidefinite? I am reading this year’s IMO problem number 2, which would be trivially true if we prove that $M$ is positive semidefinite. Problem $\boldsymbol2$. Show that the inequality $$\sum_{i=1}^{n}\sum_{j=1}^{n}\sqrt{\left|x_i - x_j\right|} \leqslant \sum_{i=1}^{n}\sum_{j=1}^n\sqrt{\left|x_i + x_j\right|}$$ holds for all real numbers $x_1,…,x_n$.
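A randomized quadratic-form check supports the positive-semidefiniteness guess (sampling $v^{\mathsf T} M v \ge 0$; this is evidence, not a proof, and `quad_form` is just an illustrative name):

```python
import math
import random

random.seed(1)

def quad_form(xs, v):
    """v^T M v for M_ij = sqrt|x_i + x_j| - sqrt|x_i - x_j|."""
    n = len(xs)
    return sum(v[i] * v[j] *
               (math.sqrt(abs(xs[i] + xs[j])) - math.sqrt(abs(xs[i] - xs[j])))
               for i in range(n) for j in range(n))

for _ in range(500):
    n = random.randint(1, 7)
    xs = [random.uniform(-5, 5) for _ in range(n)]
    v = [random.gauss(0, 1) for _ in range(n)]
    assert quad_form(xs, v) >= -1e-9  # small slack for floating-point rounding
```

Taking $v = (1, \ldots, 1)$ specializes $v^{\mathsf T} M v \ge 0$ to exactly the IMO sum, which is why positive semidefiniteness would settle the problem at once.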

  • Let $G$ be a group of order $pq$, where $p$, $q$ are distinct primes and $p < q$
    by Adam French on July 25, 2021 at 2:12 am

    This is an exercise in the first chapter of Serge Lang’s Algebra. I am wondering why $q \not\equiv 1 \pmod{p}$ is assumed, considering it is unnecessary. Indeed, if that is excluded from the requirements, then let $H_q$ and $H_p$ be the Sylow subgroups of orders $q$ and $p$, respectively. Then they are cyclic and thus have trivial intersection. Since they have trivial intersection, the product of groups $H_q H_p$ (which is a group since $H_q$ is normal) is isomorphic to $H_q \times H_p$, which has order $pq$, and so it is equal to $G$. Considering $p$ and $q$ are coprime, $G$ is cyclic. Is this solution correct/an acceptable answer to this problem? If so, why is the aforementioned requirement provided? Note that all of the information used in my proof is either in the exercises preceding this one or in the chapter on Sylow subgroups.

  • Is $\sum_{n=1}^{\infty} \frac{\sin(n^2x)}{n}$ uniformly convergent on $(0,\pi)$?
    by Felix Quinque on July 25, 2021 at 12:49 am

    I am trying to prove that the function $\sum_{n=1}^{\infty} \frac{\sin(n^2x)^2}{n^3}$ is not a fractal by showing that it has a well-defined derivative (as fractals do not). In order to do that, I have to find out whether the function $\sum_{n=1}^{\infty} \frac{\sin(n^2x)}{n}$ is uniformly convergent on the interval $(0,\pi)$. If it is, the original function is not a fractal! It is clear that using the Weierstrass M-test it can be shown that $\sum_{n=1}^{\infty} \frac{\sin(n^2x)}{n^\alpha}$ where $\alpha > 1$ is uniformly convergent, since $\sum_{n=1}^{\infty} \frac{1}{n^\alpha}$ converges and $|\frac{\sin(n^2x)}{n^\alpha}| \leq \frac{1}{n^\alpha}$. Now, in the case $\alpha = 1$, the function $\sum_{n=1}^{\infty} \frac{\sin(nx)}{n}$ (no $n^2$ in the sine) is the Fourier series of a sawtooth wave, so it converges uniformly everywhere except for when $x$ is a multiple of $\pi$. I’m not sure if the function I’m investigating (with $n^2$ in the sine) would share a similar property. I have done quite a bit of research and it seems nobody has analysed this specific function yet, and I’m a bit unsure as to how I can continue here. I believe that somehow the following substitution might help: $$\sum_{n=1}^{\infty} \frac{\sin(n^2x)}{n} = \sum_{n=1}^{\infty} \frac{1}{2in} (e^{i n^2 x} - e^{-i n^2 x})$$ But I can’t get to any results from here either. It would be amazing if you could give me some pointers as I’m making no progress (I’m a non-math PhD student who is stuck figuring this out) and am wasting ungodly amounts of time on this without a solution in sight. Thanks so much for your help in advance! EDIT: It can be proven that $\sum_{n=1}^{\infty} \frac{\sin(n^2x)}{n}$ is pointwise convergent using Dirichlet’s test fairly easily. <— This is incorrect: there was a mistake in my derivation.