Mathematics Stack Exchange News Feeds

  • Remarkable identities $f(n) = \frac{a^n}{(a-b)(a-c)} + \frac{b^n}{(b-a)(b-c)} + \frac{c^n}{(c-a)(c-b)}$
    by Zakhurf on May 18, 2022 at 7:26 pm

    Let $n$ be an integer, and \begin{equation} f(n) = \frac{a^n}{(a-b)(a-c)} + \frac{b^n}{(b-a)(b-c)} + \frac{c^n}{(c-a)(c-b)} \end{equation} \begin{equation} g(n) = \frac{(bc)^n}{(a-b)(a-c)} + \frac{(ac)^n}{(b-a)(b-c)} + \frac{(ab)^n}{(c-a)(c-b)} \end{equation} We have the following impressive identities, for all $a,b,c$, \begin{align} f(0) &= 0 \\ f(1) &= 0 \\ f(2) &= 1 \\ f(3) &= a+b+c \\ f(4) &= a^2 + b^2 + c^2 + ab + ac + bc \\ f(5) &= a^3 + b^3 + c^3 + a^2b + a^2c + b^2c + ab^2 + ac^2 + bc^2 \\ f(6) &= a^4 + b^4 + c^4 + a^3b + a^3c + b^3c + ab^3 + ac^3 + bc^3 + a^2bc + ab^2c + abc^2 +a^2b^2 + a^2c^2 + b^2c^2 \\ \\ g(0) &= 0 \\ g(1) &= 1 \\ g(2) &= ab + ac + bc \\ g(3) &= a^2b^2 + a^2c^2 + b^2c^2 + a^2bc + ab^2c + abc^2 \\ g(4) &= a^3b^3 + a^3c^3 + b^3c^3 + a^3b^2c + a^3bc^2 + a^2b^3c + ab^3c^2 + a^2bc^3 + ab^2c^3 + a^2b^2c^2 \end{align} which I have verified by plugging the expressions into Wolfram Alpha. It seems that the general form, for $n > 2$, should be \begin{align} f(n) &= \sum_{i+j+k = n-2}a^ib^jc^k \\ g(n) &= \sum_{\substack{i+j+k = 2(n-1)\\1\leq i,j,k \leq n-1}}a^ib^jc^k \end{align} The questions are: How can one prove the statements for general $n$ using induction? Intuitively, induction should work, but I do not see the induction step. Could we prove the general case without using induction? There is a link between these formulas and Vandermonde matrices (see below); would there be a nice proof using matrices? ======================================================================= I arrived at such identities when working with partial fraction decomposition, and after some related work I realized that the Vandermonde matrices were almost the inverses of the matrices which appear when we do partial fraction decomposition. Then I realised that the Vandermonde matrices have very nice inverses: \begin{equation} \begin{pmatrix} 1&1 \\ a&b \end{pmatrix} \begin{pmatrix} -\frac{b}{(a - b)} & \frac{1}{(a - b)} \\ -\frac{a}{(b - a)} & \frac{1}{(b - a)} \\ \end{pmatrix} = I_{2} \end{equation} \begin{equation} \begin{pmatrix} 1&1&1 \\ a&b&c \\ a^2&b^2&c^2 \end{pmatrix} \begin{pmatrix} \frac{b c}{(a - b) (a - c)} & -\frac{b + c}{(a - b) (a - c)} & \frac{1}{(a - b) (a - c)} \\ \frac{a c}{(b - a) (b - c)} & -\frac{a + c}{(b - a) (b - c)} & \frac{1}{(b - a) (b - c)} \\ \frac{a b}{(c - a) (c - b)} & -\frac{a + b}{(c - a) (c - b)} & \frac{1}{(c - a) (c - b)} \end{pmatrix} = I_{3} \end{equation} \begin{equation} \begin{pmatrix} 1&1&1&1 \\ a&b&c&d \\ a^2&b^2&c^2&d^2 \\ a^3&b^3&c^3&d^3 \end{pmatrix} \begin{pmatrix} -\frac{bcd}{(a - b) (a - c)(a-d)} & \frac{bc + cd + bd}{(a - b) (a - c)(a-d)} &-\frac{b+c+d}{(a - b) (a - c)(a-d)} & \frac{1}{(a - b) (a - c)(a-d)}\\ -\frac{a cd}{(b - a) (b - c)(b-d)} & \frac{ac + ad + cd}{(b - a) (b - c)(b-d)} & -\frac{a + c + d}{(b - a) (b - c)(b-d)}& \frac{1}{(b - a) (b - c)(b-d)}\\ -\frac{a bd}{(c - a) (c - b)(c-d)} & \frac{ab + ad + bd}{(c - a) (c - b)(c-d)} & -\frac{a + b + d}{(c - a) (c - b)(c-d)}&\frac{1}{(c - a) (c - b)(c-d)}\\ -\frac{a bc}{(d - a) (d - b)(d-c)} & \frac{ab + ac + bc}{(d - a) (d - b)(d-c)} & -\frac{a + b + c}{(d - a) (d - b)(d-c)}&\frac{1}{(d - a) (d - b)(d-c)} \end{pmatrix} = I_{4} \end{equation} The identities $f(0), f(1), f(2)$ are the last column of the inverse equation for the $3\times 3$ matrices. However, it seems that such a matrix argument is not sufficient to prove the case for general $n$, and that many similar identities (coming from the other entries of the matrices) should exist.
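
    For experimenting beyond Wolfram Alpha, the conjectured form of $f(n)$ (which is the complete homogeneous symmetric polynomial $h_{n-2}(a,b,c)$) can be checked symbolically for small $n$; a minimal sympy sketch (the helper names are mine):

        from sympy import symbols, cancel

        a, b, c = symbols('a b c')

        def f(n):
            return (a**n/((a-b)*(a-c)) + b**n/((b-a)*(b-c)) + c**n/((c-a)*(c-b)))

        def h(n):
            # all monomials a^i b^j c^k with i + j + k = n
            return sum(a**i * b**j * c**(n - i - j)
                       for i in range(n + 1) for j in range(n - i + 1))

        for n in range(2, 8):
            assert cancel(f(n) - h(n - 2)) == 0   # cancel() collapses the rational expression to 0
        print("f(n) = h_{n-2}(a,b,c) verified for n = 2..7")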

  • How to evaluate the sum $\sum_{n=0}^{\infty}\frac{1}{3n^{2}+4n+1}$
    by Bruh on May 18, 2022 at 5:09 pm

    I have an infinite sum $$\sum_{n=0}^{\infty}\frac{1}{3n^{2}+4n+1}$$ I factored the denominator $$\sum_{n=0}^{\infty}\frac{1}{\left(3n+1\right)\left(n+1\right)}$$ Then I separated the fraction $$\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{3}{3n+1}-\frac{1}{n+1}\right)$$ Then I replaced the numerator $1$ by $x$ raised to a matching power (to be evaluated at $x=1$), which I am not sure is allowed: $$\frac{3}{2}\sum_{n=0}^{\infty}\frac{x^{3n+1}}{3n+1}-\frac{1}{2}\sum_{n=0}^{\infty}\frac{x^{n+1}}{n+1}$$ Then I wrote down the integrals which produce the previous terms, $$\frac{3}{2}\sum_{n=0}^{\infty}\int_{0}^{1}x^{3n}dx$$ and $$-\frac{1}{2}\sum_{n=0}^{\infty}\int_{0}^{1}x^{n}dx$$ Then I changed the order of summation and integration and got $$\frac{3}{2}\int_{0}^{1}\frac{1}{1-x^{3}}dx$$ and $$-\frac{1}{2}\int_{0}^{1}\frac{1}{1-x}dx$$ The first integrand can be factored, $$\frac{3}{2}\int_{0}^{1}\frac{1}{\left(1-x\right)\left(1+x+x^{2}\right)}dx$$ and then separated: $$\frac{1}{2}\int_{0}^{1}\frac{1}{1-x}+\frac{x+2}{1+x+x^{2}}dx$$ The first part cancels with $$-\frac{1}{2}\int_{0}^{1}\frac{1}{1-x}dx$$ and I’m left with $$\frac{1}{2}\int_{0}^{1}\frac{x+2}{1+x+x^{2}}dx$$ which is $$\frac{\sqrt{3}\pi}{12}+\frac{\ln\left(3\right)}{4}$$ But the correct answer is $$\frac{\sqrt{3}\pi}{12}+\frac{3\ln\left(3\right)}{4}$$ So I would like to ask if this approach is invalid or if I’m just missing something.
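
    A quick numeric sanity check (a sketch; the mpmath precision is arbitrary) shows the series really does sum to $\frac{\sqrt{3}\pi}{12}+\frac{3\ln 3}{4}\approx 1.27741$, so the slip is somewhere in the manipulations, not in the target value:

        from mpmath import mp, nsum, inf, sqrt, pi, log

        mp.dps = 30
        s = nsum(lambda n: 1/((3*n + 1)*(n + 1)), [0, inf])
        print(s)                            # 1.27741...
        print(sqrt(3)*pi/12 + log(3)/4)     # 0.72810... (the value derived above)
        print(sqrt(3)*pi/12 + 3*log(3)/4)   # 1.27741... (the stated correct answer)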

  • How to evaluate $\int^{\infty}_0 \frac{x^{1010}}{(1 + x)^{2022}} dx$?
    by Anonymous on May 18, 2022 at 6:01 am

    How to evaluate the following integral? $$\int^{\infty}_0 \frac{x^{1010}}{(1 + x)^{2022}} dx$$ Here’s my work: $$I = \int_0^\infty \dfrac{x^{1010}}{(1+x)^{2022}} dx = \int_0^\infty \dfrac{1}{x^{1012}(1 + \frac1x)^{2022}}dx$$ Putting $1 + 1/x = t$ (so $dt = -dx/x^2$), $$\implies I =\int^1_\infty -\dfrac{1}{(\frac1{1-t})^{1010}(t)^{2022}}dt =\int_1^\infty \dfrac{1}{(\frac1{1-t})^{1010}(t)^{2022}}dt $$ $$=\int_1^\infty \dfrac{1}{(\frac1{1-t})^{1010}\cdot t^{1010} \cdot (t)^{1012}}dt =\int_1^\infty \dfrac{1}{(\frac t{1-t})^{1010} \cdot (t)^{1012}}dt =\int_1^\infty \dfrac{1}{(\frac 1{1/t-1})^{1010} \cdot (t)^{1012}}dt $$ $$=\int_1^\infty \dfrac{(1/t-1)^{1010}}{ (t)^{1012}}dt =\int_1^\infty \dfrac{(\frac{1-t}{t})^{1010}}{ t^2\cdot (t)^{1010}}dt = \int_1^\infty\dfrac{1}{t^2} \cdot\left( \dfrac{1-t}{t^2}\right)^{1010} dt $$ I don’t know how to continue from here. I also thought integration by parts might work, but I am not sure how to apply it here.
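
    For reference, the integral is a Beta integral: $\int_0^\infty \frac{x^{a-1}}{(1+x)^{a+b}}\,dx = B(a,b)$ with $a = b = 1011$, so $I = B(1011,1011) = \frac{(1010!)^2}{2021!}$. A small mpmath check (a sketch; splitting the quadrature at $x=1$ is my choice):

        from math import comb
        from mpmath import mp, mpf, beta, quad, inf

        mp.dps = 50
        exact = beta(1011, 1011)                   # B(1011, 1011) = (1010!)^2 / 2021!
        closed = mpf(1)/(1011*comb(2021, 1010))    # the same number via a binomial coefficient
        numeric = quad(lambda x: x**1010/(1 + x)**2022, [0, 1, inf])
        print(exact, closed, numeric)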

  • “Multiply everything so far, plug into polynomial” – can these always yield primes?
    by Noah Schweber on May 18, 2022 at 4:19 am

    Say that a factonomial sequence is a (possibly infinite) sequence of natural numbers $x_i$ such that each $x_i$ is prime, and there is some (single variable, integer coefficients, nonconstant) polynomial $p$ such that for all $i>1$ we have $$x_i=p(\prod_{j<i}x_j).$$ Call the polynomial $p$ the shape of the factonomial sequence; each factonomial sequence is determined by its shape and its initial value. The basic idea is that factonomial sequences come out of some elementary proofs that infinitely many primes of a certain form exist. For instance, the usual “multiply everything and add one” argument gives rise to the shape $p_1(u)=u+1$, and the usual “multiply everything twice and add two” proof that there are infinitely many primes $\equiv 3 \pmod 4$ gives rise to the shape $p_2(u)=u^2+2$. The maximal factonomial sequence with shape $p_2$ and starting value $3$ is $$3,11,1091, 1296216011, 2177870960662059587828905091.$$ The next term would be $$10329907495268194677701503661780370732730049826819138974714891651071966324541232011,$$ but that’s not prime. (Amusingly, its smallest prime factor is $41$, but its smallest prime factor which is $3$ mod $4$ is a bit bigger: $76870667$.) Question: is there an infinite factonomial sequence? (This is a question which I’m 99% sure one of my students will ask me in the next couple of days!) I’m aware of multiple results of the form “no function of such-and-such type has output consisting entirely of primes,” but I don’t immediately see one which applies here. Unfortunately, terms in factonomial sequences grow so fast that I can’t do much experimenting.
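
    The sequence is easy to reproduce and extend experimentally (a sympy sketch; the variable names are mine):

        from sympy import isprime

        seq, prod = [3], 3                 # start value 3; prod tracks the product so far
        while True:
            nxt = prod**2 + 2              # the shape p2(u) = u^2 + 2
            if not isprime(nxt):
                break
            seq.append(nxt)
            prod *= nxt
        print(seq)            # [3, 11, 1091, 1296216011, 2177870960662059587828905091]
        print(prod**2 + 2)    # the 83-digit composite next term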

  • Why is an LTI system with some zero eigenvalues still stable?
    by fibon on May 18, 2022 at 2:44 am

    The textbook says an LTI system $\dot x=Ax$ is stable if and only if the eigenvalues of $A$ have strictly negative real parts. However, I found a counterexample. If $$A= \begin{bmatrix}-3 & -1 & -1\\ 1 & -0.5 & -0.5\\ 1 & -0.5 & -0.5\end{bmatrix}$$ then the state response of this system is convergent, and $x_2 = x_3$. The system is stable even though an eigenvalue of $A$ is $0$. Am I wrong?
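
    For what it's worth, the spectrum of this particular $A$ is easy to inspect numerically (a numpy sketch): the eigenvalues come out as $0$ and $-2\pm i$, so none has positive real part, but one is exactly zero, which is why the "strictly negative" criterion does not apply as stated.

        import numpy as np

        A = np.array([[-3.0, -1.0, -1.0],
                      [ 1.0, -0.5, -0.5],
                      [ 1.0, -0.5, -0.5]])
        print(np.linalg.eigvals(A))   # approximately 0, -2+1j, -2-1j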

  • $( \ X\to G \ , \ \star \ )$ is a group if $(\varphi \star \psi)(a) = \varphi(a) \star \psi(a)$
    by Mystery on May 18, 2022 at 12:12 am

    Let $X$ be a set and $G$ a group with the operation $\star$. Show that the set $$ \mathcal{X} = \Big\{ \varphi : X\to G \mid \text{$\varphi$ is a function} \Big\} $$ is a group with the operation \begin{equation}\label{star} \big(\varphi \star \psi\big)(a) \; = \; \varphi(a) \star \psi(a) \qquad \quad \forall\,a\in X. \end{equation} Associativity is pretty easy since $(G,\star)$ is a group: let $\varphi,\tau,\phi\in\mathcal{X}$. Thus, \begin{align*} ((\varphi\star\tau)\star\phi)(g) &= (\varphi\star\tau)(g)\star\phi(g)\\ &= (\varphi(g)\star\tau(g))\star\phi(g)\\ &=\varphi(g)\star (\tau(g)\star\phi(g))&G \text{ group}\\ &= \varphi(g)\star(\tau\star\phi)(g)\\ &= (\varphi\star(\tau\star\phi))(g) \end{align*} And for the identity, the map $id:X\to G$, $g\mapsto e$, with $e$ the identity of $G$, is a function and acts as an identity for $\mathcal{X}$: $$(\varphi\star id)(g) = \varphi(g)\star id(g) = \varphi(g)\star e = \varphi(g) = e\star \varphi(g) = id(g)\star\varphi(g) = (id\star\varphi)(g).$$ But I’m having trouble proving closure and inverses. Since we don’t know whether an element of $\mathcal{X}$ is bijective or not, we can’t construct an inverse that way. And for closure, how can I show that $a\mapsto\varphi(a) \star \psi(a)$ is still a function?
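
    It can help to play with a toy instance first (a Python sketch; the choices of $X$, $G$, and the sample values are mine). Note how the candidate inverse is built pointwise from inverses in $G$; no bijectivity of $\varphi$ is needed:

        X = range(3)                       # a three-point set X
        MOD = 5                            # G = Z/5Z, with star = addition mod 5

        def star(phi, psi):                # the pointwise operation on functions X -> G
            return tuple((phi[a] + psi[a]) % MOD for a in X)

        def inv(phi):                      # pointwise inverse: a |-> inverse of phi(a) in G
            return tuple((-phi[a]) % MOD for a in X)

        e = tuple(0 for _ in X)            # the constant-identity function
        phi, psi, tau = (1, 4, 2), (3, 0, 1), (2, 2, 4)
        assert star(star(phi, psi), tau) == star(phi, star(psi, tau))   # associativity
        assert star(phi, inv(phi)) == e and star(inv(phi), phi) == e    # inverses
        print("group axioms hold in the toy example")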

  • Is a normal domain whose prime ideals are totally ordered a valuation ring?
    by J.Li on May 17, 2022 at 11:51 pm

    Recall that one definition of a valuation ring is a domain whose ideals are totally ordered (such a ring is then a normal domain). But if we only require the prime ideals to be totally ordered, the converse is not true: the stalk at a non-normal closed point of a one-dimensional scheme is a counterexample, for example $k[[x^2, x^3]]$ or $\mathbb{Z}_2[\sqrt{-3}]$. I just found that this has already been discussed in “Does totally ordered prime ideals in a domain imply valuation ring?” All counterexamples so far are non-normal rings, so the question in the title naturally arises.

  • Easier way to solve equation systems of $a+b+c+\cdots{}= 1$, $a^2 + b^2 + c^2+\cdots{}=2$ and so on without having to crunch massive expressions
    by CookedTurtle on May 17, 2022 at 10:39 pm

    I study at below college level. I have been trying to solve certain systems of equations involving $n$ equations in $n$ unknowns. For example, for $2$ unknowns, the problem is \begin{align} a^{\phantom{1}} + b^{\phantom{1}} &= 1 \\ a^2 + b^2 &= 2 \\ a^3 + b^3 &={} ? \end{align} This can be solved with elementary algebra and/or WolframAlpha. You can generalize this to more unknowns: \begin{align} a^{\phantom{1}} + b^{\phantom{1}} + c^{\phantom{1}} &= 1 \\ a^2 + b^2 + c^2 &= 2 \\ a^3 + b^3 + c^3 &= 3 \\ a^4 + b^4 + c^4 &={} ? \end{align} with the same constraints: $n$ unknowns, $n$ equations, in each equation the power of each variable is the same, and the pattern is clear. Now, from only the first $3$ cases (including the trivial case $a = 1$, find $a^2$), I made a conjecture about the result (the missing value of the final expression). Since this is such a random guess at the value, and so many functions could meet just the first few data points, I want to solve the version with $4$ unknowns, just to see whether the conjecture is true. However, this is very difficult: the expansions quickly get out of hand and not even WolframAlpha can do it. I want a way to at least get the solving process under control. Usually, one would generate equations and use those to solve for things like $a \cdot b^3$, but here the issue is that just setting up the equations is too difficult a task. Is there a way to elegantly solve the system? I don’t mind trading time for maybe some more difficult math.
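
    One standard way to tame the computation is Newton's identities, which pass between power sums $p_k$ and elementary symmetric polynomials $e_k$ without ever solving for the unknowns. A sketch in exact rational arithmetic (the function name is mine):

        from fractions import Fraction

        def next_power_sum(p):
            """Given the list (p_1, ..., p_n) of power sums of n unknowns, return p_{n+1}."""
            n = len(p)
            p = [Fraction(0)] + [Fraction(v) for v in p]   # 1-indexed
            e = [Fraction(1)]                              # e_0 = 1
            for k in range(1, n + 1):                      # Newton: k e_k = sum_i (-1)^(i-1) e_{k-i} p_i
                e.append(sum((-1)**(i - 1)*e[k - i]*p[i] for i in range(1, k + 1))/k)
            # for k > n all e_k vanish, so p_{n+1} = sum_i (-1)^(i-1) e_i p_{n+1-i}:
            return sum((-1)**(i - 1)*e[i]*p[n + 1 - i] for i in range(1, n + 1))

        print(next_power_sum([1, 2]))         # 5/2  (= a^3 + b^3)
        print(next_power_sum([1, 2, 3]))      # 25/6 (= a^4 + b^4 + c^4)
        print(next_power_sum([1, 2, 3, 4]))   # the 4-unknown value to test the conjecture against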

  • Number of eigenvalues of a rank $1$ operator on an infinite dimensional vector space
    by Sam Kirkiles on May 17, 2022 at 9:07 pm

    Does a rank $1$ bounded operator $\mathscr{K}:L^2([0,1])\to L^2([0,1])$ have at most $1$ non-zero eigenvalue? The reason this is not obvious to me is that $L^2([0,1])$ is infinite dimensional. In general, is rank a bound on the number of non-zero eigenvalues?
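
    A finite-dimensional warm-up frames the question (a numpy sketch, not the operator-theoretic argument): a rank $1$ matrix $uv^{T}$ has $v^{T}u$ as its only possible non-zero eigenvalue.

        import numpy as np

        rng = np.random.default_rng(0)
        u, v = rng.standard_normal(6), rng.standard_normal(6)
        K = np.outer(u, v)                      # a rank 1 matrix
        print(np.linalg.eigvals(K).round(8))    # one eigenvalue ~ v.u, the rest ~ 0
        print(v @ u)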

  • Hausdorff dimension of sets with positive Lebesgue measure
    by connected-subgroup on May 17, 2022 at 8:31 pm

    I am reading Hausdorff Dimension, Its Properties, and Its Surprises by Dierk Schleicher. Among the elementary properties of the Hausdorff dimension, the last one is: If $X\subset \Bbb R^n$ has finite positive $d$-dimensional Lebesgue measure, then $\dim_H X = d$. My work: It will be enough to show that $\mathcal H^s(X) = 0$ for all $s > d$, and $\mathcal H^s(X) = \infty$ for all $s < d$. As usual, $$\mathcal H^s(X) = \lim_{\delta\to 0} \mathcal H^s_\delta(X)$$ where $$\mathcal H^s_\delta(X) = \inf\left\{\sum_{i=1}^\infty |U_i|^s: \{U_i\} \text{ is a }\delta\text{-cover of }X \right\}$$ Could I please get some hints on how to proceed? I’m unable to relate the Lebesgue measure with coverings of $X$, which would help me find a connection with $\mathcal H^s_\delta(X)$ for given $\delta > 0$. Thanks a lot! This related question doesn’t seem to help.

  • Fibonacci sequences within the Fibonacci sequence recurrence
    by Charlie on May 17, 2022 at 5:18 pm

    I’m trying to perform a runtime analysis of the following simple recursive Fibonacci number algorithm: Fibonacci(n) { if n < 0 return -1 else if n == 0 return 0 else if n == 1 return 1 else return Fibonacci(n - 1) + Fibonacci(n - 2) } Here $T(1)$ is the run time of the algorithm when $n = 1$ and $T(0)$ is the run time when $n = 0$; in particular, $T(1), T(0)\in \Theta(1)$. Define the “$T(i)$ count” for $n$ to be the number of times $T(i)$ is evaluated by the algorithm when evaluating $T(n)$. Breaking down the formulas for smaller values of $n$, I found the following $(n,\ T(1)\text{ count},\ T(0)\text{ count})$ values: $(0,0,1)$, $(1,1,0)$, $(2,1,1)$, $(3,2,1)$, $(4,3,2)$, $(5,5,3)$, $(6,8,5)$, $(7,13,8)$. The counts of $T(1)$s and $T(0)$s seem to follow the Fibonacci sequence themselves: the number of $T(1)$s making up the run time of the $n^{\text{th}}$ Fibonacci number is the $n^{\text{th}}$ Fibonacci number itself, $F_n$, and the number of $T(0)$s is the $(n - 1)^{\text{th}}$ Fibonacci number, $F_{n-1}$, except for $F_0$, which has one $T(0)$ in its run time (I noticed that for $n = 0$ the number of $T(0)$s equals the $T(2)$ count minus the $T(1)$ count). This would mean that $$T(n) = F_n \times T(1) + F_{n - 1} \times T(0) \in \Theta\left( F_n + F_{n-1} \right) = \Theta\left( F_n \right)$$ for $n \geq 1.$ I guess my only question regarding the work above is to ask someone if it’s accurate. I’m very excited by this, so I’m hoping it’s correct. Thank you for your time, attention, and patience (I’m not sure if this is entirely in compliance with the rules, but I’m so amazed by this that I honestly just wanted to share it and determine if I’m right).
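
    The table is easy to check empirically by instrumenting the recursion and counting base-case hits (a Python sketch; the names are mine):

        def fib_counts(n):
            counts = {0: 0, 1: 0}
            def fib(m):
                if m == 0:
                    counts[0] += 1
                    return 0
                if m == 1:
                    counts[1] += 1
                    return 1
                return fib(m - 1) + fib(m - 2)
            fib(n)
            return counts[1], counts[0]   # (T(1) count, T(0) count)

        for n in range(8):
            print(n, fib_counts(n))       # (F_n, F_{n-1}) for n >= 1, and (0, 1) for n = 0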

  • The Maclaurin series of $1-(1-\frac{x^2}{2} + \frac{x^4}{24})^{2/3}$ has all coefficients positive
    by orangeskid on May 17, 2022 at 1:12 pm

    It was shown in a previous post that the Maclaurin series of $1 - \cos^{2/3} x$ has positive coefficients. There, @Dr. Wolfgang Hintze noticed that the truncation $1- \frac{x^2}{2} + \frac{x^4}{24}$ can be substituted for $\cos x$ (this seems to be true for all the truncations). The proof is escaping me. Thank you for your attention! $\bf{Added:}$ Thomas Laffey in this paper points to a proof of the fact that if $a_1$, $\ldots$, $a_n\ge 0$, then $\alpha = \frac{1}{n}$ makes the coefficients of the following series positive: $$1- (\prod_{i=1}^n (1- a_i x))^{\alpha}$$ Numerical testing suggests that $\alpha = \frac{\sum a_i^2}{(\sum a_i)^2} \ge \frac{1}{n}$ works as well (see the case $n=2$ tested here). So in our case, instead of $\alpha = \frac{1}{2}$ we can take $\alpha = \frac{2}{3}$; clearly, this would then be the optimal value. This would be a test case for $n=2$. The result for $\cos x$ used special properties of the function (it is the solution of a certain second-order differential equation). Maybe $1- x/2 + x^2/24$ is as general as any quadratic with two positive (distinct) roots.
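
    For experimentation, the claim can at least be checked symbolically to any fixed order (a sympy sketch; the cutoff at order $21$ is arbitrary):

        from sympy import symbols, series, Rational

        x = symbols('x')
        expr = 1 - (1 - x**2/2 + x**4/24)**Rational(2, 3)
        s = series(expr, x, 0, 21).removeO()
        coeffs = [s.coeff(x, k) for k in range(21)]
        print(all(c >= 0 for c in coeffs))                 # expected True per the claim
        print([(k, c) for k, c in enumerate(coeffs) if c != 0][:5])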

  • $\int _0^1f^2\left(x\right)dx-2\int _0^{\sqrt{3}-1}\:\left(x+1\right)f\left(x\right)dx\:+1=0$
    by shangq_tou on May 17, 2022 at 10:33 am

    Let $f$ be an increasing continuous function on $[0,1]$ such that $$\int _0^1(f\left(x\right))^2dx-2\int _0^{\sqrt{3}-1}\:\left(x+1\right)f\left(x\right)dx\:+1=0$$ Find all functions $f$ with these properties. Attempt: First, the function $f$ is continuous, hence integrable, so the computations with integrals below are justified. Then I did the following: $$\int _0^1(f\left(x\right))^2dx-2\int _0^{\sqrt{3}-1}\:\left(x+1\right)f\left(x\right)dx\:+1=$$ $$\int _0^{\sqrt{3}-1}(f\left(x\right))^2dx+\int _{\sqrt{3}-1}^1(f\left(x\right))^2dx-2\int _0^{\sqrt{3}-1}\:\left(x+1\right)f\left(x\right)dx\:+1=$$ $$\int _0^{\sqrt{3}-1}\left(f\left(x\right)-\left(x+1\right)\right)^2dx+\int _{\sqrt{3}-1}^1(f\left(x\right))^2dx-\int _0^{\sqrt{3}-1}\left(x+1\right)^2dx+1=0$$ Since $$-\int _0^{\sqrt{3}-1}\left(x+1\right)^2dx=-\frac{6\sqrt{3}-10}{3}-3+\sqrt{3}$$ this gives $$\int _0^{\sqrt{3}-1}\left(f\left(x\right)-\left(x+1\right)\right)^2dx+\int _{\sqrt{3}-1}^1(f\left(x\right))^2dx-\frac{6\sqrt{3}-10}{3}-2+\sqrt{3}=0$$ $$\int _0^{\sqrt{3}-1}\left(f\left(x\right)-\left(x+1\right)\right)^2dx+\int _{\sqrt{3}-1}^1(f\left(x\right))^2dx+\frac{-3\sqrt{3}+4}{3}=0$$ How should I continue from there? I got stuck, and I am not sure this is going to lead me somewhere; maybe I should have started differently. Does somebody have an idea what the functions look like?

  • An example where $M^\ast$ is not reflexive
    by Jian on May 17, 2022 at 3:12 am

    Let $R$ be a noetherian ring. Set $(-)^\ast={\rm Hom}_R(-,R)$. For each $R$-module $N$, let $\pi_N:N\rightarrow N^{\ast\ast}$ be the map sending $n\in N$ to $(f\mapsto f(n))$. $N$ is called reflexive if $\pi_N$ is an isomorphism. Question: Does there exist a finitely generated $R$-module $M$ such that $M^\ast$ is not reflexive? I guess the answer is yes, but I can’t find any example. For each $R$-module $N$, we can check directly that the composition $N^\ast\xrightarrow{\pi_{N^\ast}}N^{\ast\ast\ast}\xrightarrow{(\pi_N)^\ast}N^\ast$ is the identity. In particular, $\pi_{N^\ast}$ is always injective. I searched the internet. It is proved in Yoshino’s paper that if $R$ is Gorenstein in depth one, then $M^\ast$ is reflexive for each finitely generated $R$-module $M$; see Lemma 4.4 of Homotopy categories of unbounded complexes of projective modules. Thank you in advance.

  • Is there any function such that the limit of its derivative divided by its value to the nth power diverges?
    by Toby Saunders on May 16, 2022 at 11:04 pm

    Recently, I have become intrigued by this functional: $$D_n=\lim \limits _{x\to \infty}\frac{f'(x)}{[f(x)]^n}.$$ In particular, provided that the function is both differentiable and increasing in magnitude for all $x$, for which functions does $D_n$ diverge to infinity? First, I considered $n=1$. This has obvious solutions. For example, $e^{e^x}$, which has derivative $e^x\cdot e^{e^x}$, makes $D_1$ diverge to infinity, but not $D_2$. Is there any function which satisfies the above conditions and makes $D_2$ diverge? What about a function which satisfies the above conditions and makes $D_n$ diverge for all values of $n$? As a reminder, here are the conditions on the functions: you must be able to choose an $x_0$ such that the function is differentiable for all $x>x_0$; the absolute value of the function must be increasing for all $x$ on which it is defined; and the function must be both defined on the real numbers and real-valued.

  • Criteria for a $3 \times 3$ matrix to be positive definite
    by XXX1010 on May 16, 2022 at 3:36 pm

    Here it is said that a $2\times 2$ matrix $A$ is positive definite if and only if $\operatorname{tr}(A) >0$ and $\det(A)>0$. This will not work if $A$ is $3\times 3$. But is there any way to enforce the positive definiteness of the matrix $A$ via the trace and determinant of $A$, if $A$ is of size $3\times 3$?
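
    Trace and determinant alone cannot suffice, as a quick counterexample shows (a numpy sketch; the diagonal matrix is my choice). For a symmetric $3\times 3$ matrix, the standard fix is to also require the sum of the three $2\times 2$ principal minors to be positive, i.e. all three coefficients of the characteristic polynomial.

        import numpy as np

        A = np.diag([-1.0, -1.0, 10.0])
        print(np.trace(A), np.linalg.det(A))   # 8.0, 10.0: both positive
        print(np.linalg.eigvalsh(A))           # [-1, -1, 10]: not positive definite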

  • Can the real numbers be split into two sets of the same measure?
    by emacs drives me nuts on May 16, 2022 at 11:58 am

    The rational numbers $\Bbb Q$ are dense in $\Bbb R$, but they are still a set of measure zero, i.e. $$\begin{align} \mu(\Bbb Q \cap [a,b]) &= 0 \\ \mu((\Bbb R\!\setminus\! \Bbb Q) \cap [a,b]) &= b-a \\ \tag 1 \end{align}$$ for any finite interval $[a,b]$. Is it possible to have two more equally distributed sets, so that neither of them is a set of measure $0$, with the analogue of (1) holding on every finite interval? More specifically, is there a decomposition $A, B\subset \Bbb R$ and a measure $\mu$ such that all of the following conditions hold: $$\begin{align} A\cap B = \emptyset\quad&\text{ and }\quad A\cup B = \Bbb R \\ \mu ([a,b]) &= b-a \\ \mu (A\cap [a,b]) &= (b-a) / 2 \\ \mu (B\cap [a,b]) &= (b-a) / 2 \\\tag 2 \end{align}$$ for any finite interval $[a,b]\subset\Bbb R$? The first line just states that $A,B$ is a decomposition of $\Bbb R$; the second line is a common normalizing condition for $\mu$. Or, at your option, that $$\begin{align} \mu(A\cap[a,b]) &= (b-a)\kappa \qquad\text{ for some } 0<\kappa<1 \\ \mu(B\cap[a,b]) &= (b-a)(1-\kappa) \end{align}$$ again for any finite interval $[a,b]$. It would even be acceptable if $\kappa=\kappa(a,b)$ depended on $a$ and $b$, provided $0<\kappa(a,b)<1$ for finite intervals. My intuition says that there is no such decomposition, but maybe I am wrong.

  • Find the number of elements in $\{0,1\}^n$ with no more than three $1$’s or three $0$’s in a row
    by Lucy Manzoli on May 16, 2022 at 5:53 am

    I’m trying to find a general formula for the number of elements $s_n$ in $\{0,1\}^n$ with no more than three $1$’s or three $0$’s in a row, where $n\geq1$. I calculated $s_n$ for small values of $n$ but could not really come up with a formula to prove by induction. I also approached the problem combinatorially by considering two groups, one of three $0$’s and one of three $1$’s, and arranging them together with other arbitrary elements totalling $n$ elements in all; but since we want the number of strings with no more than three $0$’s or $1$’s in a row, not exactly three, this approach does not work either.
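
    Brute-force enumeration for small $n$ is cheap and gives data to conjecture from (a Python sketch):

        from itertools import product, groupby

        def s(n):
            # count binary strings of length n whose maximal runs have length <= 3
            return sum(1 for w in product('01', repeat=n)
                       if all(len(list(g)) <= 3 for _, g in groupby(w)))

        print([s(n) for n in range(1, 11)])   # 2, 4, 8, 14, 26, 48, 88, 162, 298, 548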

  • Asymptotic for $\sum_{k=1}^n k^n$
    by Vladimir Reshetnikov on May 15, 2022 at 9:38 pm

    Consider the OEIS sequence A031971, which is defined as: $$a_n=\sum\limits_{k=1}^n k^n\quad\color{gray}{(1,\,5,\,36,\,354,\,4425,\,67171,\,1200304,\,.\!.\!.\!)}\tag{1}$$ I’m interested in the asymptotic behavior of $a_n$ for $n\to\infty$. Empirically, it appears that $$a_n\stackrel{\color{gray}?}\sim\frac{e}{e-1}\,n^n\cdot\left(1-\frac{e+1}{2\,(e-1)^2}\,n^{-1}+c\,n^{-2}+\mathcal O\!\left(n^{-3}\right)\right),\tag{2}$$ where $c\approx0.6310116…$ (I haven’t found a plausible closed form for it). The leading term $\frac{e}{e-1}\,n^n$ is given in the OEIS. How can we prove this formula and find higher terms in it?
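
    The claimed expansion can be probed at high precision (an mpmath sketch; the precision and the values of $n$ are my choices):

        from mpmath import mp, mpf, e

        mp.dps = 60

        def a(n):
            return sum(mpf(k)**n for k in range(1, n + 1))

        for n in (50, 100, 200, 400):
            N = mpf(n)
            ratio = a(n)/(e/(e - 1)*N**N)
            # peel off the conjectured 1/n term; the n^2-scaled remainder
            # should drift toward c ~ 0.631 if (2) is right
            print(n, (ratio - 1 + (e + 1)/(2*(e - 1)**2)/N)*N**2)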

  • Maximum number of linearly independent non-commuting matrices
    by ApprenticeTheSecond on May 15, 2022 at 5:34 pm

    Let $S$ be a set of non-commuting, linearly independent $d \times d$ positive definite matrices (i.e., for any $A \neq B$, $[A, B] = AB - BA \neq 0$). Is there any upper bound for the number of elements the set $S$ contains? (It is clear that it must be equal to or smaller than $d^2$. Any reference to a book/article is appreciated.) By a positive definite matrix I mean a matrix of the form $A = M^{\dagger}M$ where $M$ is a $d \times d$ invertible matrix over the field $\mathbb{C}$. Linear independence, however, is (necessarily) considered over $\mathbb{R}$, e.g., $A \neq a_1 B + a_2 C$ for any $a_1, a_2 \in \mathbb{R}$ and $A,B,C \in S$.

  • Smallest eigenvalue of a nearest neighbor matrix in $2$ dimensions.
    by krypt24 on May 15, 2022 at 1:53 pm

    Consider a 2D square lattice with $n \times n$ lattice sites. A matrix $M_n$ of size $n^2 \times n^2$ is constructed by setting $M_{ij} = u$ (where $0 \leq u \leq 1$) if sites $i$ and $j$ are nearest neighbors, and all the diagonal elements $M_{ii} = 1$. For example, with $2 \times 2$ lattice sites, we have $$M_2 = \begin{pmatrix} 1 & u & u & 0 \\ u & 1 & 0 & u \\ u & 0 & 1 & u \\ 0 & u & u & 1 \end{pmatrix}$$ (i.e. site $1$ has nearest neighbors $2, 3$ and site $2$ has nearest neighbors $1, 4$, etc.). The smallest eigenvalue of $M_2$ is $\lambda_\text{min}^{(2)} = 1-2u$, for $M_3$ it is $\lambda_\text{min}^{(3)} = 1-2\sqrt{2}u$, and for $M_4$ it is $\lambda_\text{min}^{(4)} = 1-(1+\sqrt{5})u$. Numerically I seem to get $\lambda_{\text{min}}^{(N)} \to 1-4u$ as $N \to \infty$, but I am not sure how to prove it. Using the Gershgorin circle theorem, I am able to get the bound $\lambda_{\text{min}}^{(N)} \geq 1-4u$, so it seems like the matrix here saturates the lower bound. Is there a way to prove this?
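
    In matrix form, $M_n = I + u\,(A\otimes I + I\otimes A)$, where $A$ is the adjacency matrix of the path graph on $n$ vertices, whose eigenvalues are the classical $2\cos\frac{k\pi}{n+1}$; this suggests $\lambda_{\min}^{(n)} = 1 + 4u\cos\frac{n\pi}{n+1} \to 1-4u$, matching the values above. A numpy sketch (the value of $u$ is my choice):

        import numpy as np

        def min_eig(n, u):
            A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # path adjacency
            I = np.eye(n)
            M = np.eye(n*n) + u*(np.kron(A, I) + np.kron(I, A))            # the n x n grid
            return np.linalg.eigvalsh(M).min()

        u = 0.3
        for n in (2, 3, 4, 10, 30):
            print(n, min_eig(n, u), 1 + 4*u*np.cos(n*np.pi/(n + 1)))       # the two columns agree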

  • Which rings are rings of continuous functions?
    by Noah Schweber on May 14, 2022 at 6:39 pm

    This is a question for which I’ve found a number of “near-miss” results online, which may actually be answers but whose direct relevance I haven’t been able to see. Say that a ring $A$ is spatial iff there is some topological space $\mathcal{X}$ such that $A\cong C(\mathcal{X})$, where $C(\mathcal{X})$ is the ring of continuous functions $\mathcal{X}\rightarrow\mathbb{R}$. Is there a purely algebraic characterization of spatiality? I’ve been told that Gelfand representations are relevant here, but I don’t immediately see how they answer the question; maybe I’m missing something, though. (Note however that I do mean to ask about rings, rather than more intricate structures like Banach algebras. Also note that I’m not assuming any tameness properties on the spaces which are candidates for witnessing spatiality.) EDIT: “purely algebraic characterization” is of course some serious weasel-wordery. Here’s one way to make that precise (and so make possible a rigorous negative answer): Is there an $\mathcal{L}^2_{\infty,\infty}$-sentence characterizing the spatial rings? Here $\mathcal{L}^2_{\infty,\infty}$ is the fully-infinitary version of second-order logic: we allow arbitrary-cardinality Boolean combinations and quantifications (over both first- and second-order objects). Of course, any specific $\mathcal{L}^2_{\infty,\infty}$-sentence can only “reach up” to a particular cardinal, so this isn’t actually as overkill as it may appear.

  • $\sum_{k=0}^\infty\frac{1}{k+1}\binom{3k+1}{k}\left(\frac{1}{2}\right)^{3k+2}$ converges to $\frac{3-\sqrt{5}}{2}$?
    by C.C on May 14, 2022 at 9:18 am

    I stumbled upon the expression $$ \sum_{k=0}^\infty \frac{1}{k+1} \binom{3k+1}{k} \left( \frac{1}{2} \right)^{3k+2} $$ and it seems to converge to $$ \frac{3-\sqrt{5}}{2} $$ Are they equal? How can one prove that? Not sure if it’s helpful: $\frac{1}{k+1}\binom{3k+1}{k}, \; k\ge0$ is the OEIS sequence $A006013$. I only have fundamental knowledge in combinatorics, so I could only check numerically that the convergence seems to hold. However, I wouldn’t mind learning new theories.
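
    A high-accuracy numeric check is straightforward (a Python sketch with exact partial sums; the 200-term cutoff is arbitrary, and the terms decay geometrically like $(27/32)^k$):

        from fractions import Fraction
        from math import comb, sqrt

        s = sum(Fraction(comb(3*k + 1, k), (k + 1)*2**(3*k + 2)) for k in range(200))
        print(float(s), (3 - sqrt(5))/2)   # both ~ 0.3819660113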

  • Criteria for Hausdorff
    by user1057350 on May 14, 2022 at 6:01 am

    Let $f:X\to Y$, where $Y$ is Hausdorff and $f$ is continuous. How does one prove that $f$ is injective if $X$ is Hausdorff? It is easy enough to show that $f$ injective implies $X$ Hausdorff, and I have been able to find examples where, if $f$ is not injective, $X$ is not Hausdorff. However, is it possible to prove the above statement in general? I cannot seem to work it out from the definitions.

  • Can you prove that these two series are equal?
    by irbag on May 13, 2022 at 2:29 pm

    For all $x>0$, let $$ f(x)=\sum_{n=0}^{+\infty}\frac{1}{x(x+1)\dots(x+n)}$$ Can you prove that for all $x>0$ $$f(x)= e \sum_{n=0}^{+\infty}\frac{(-1)^n}{(x+n)n!} $$ This is a question from a test for undergraduate students. I checked that the series that defines $f$ converges; moreover, I proved that it converges uniformly on every interval of the form $[a,+\infty[$ with $a>0$. One of my attempts was to differentiate both series and see whether the expressions for the derivatives were easier to handle, but I didn’t get anywhere. Any suggestions?
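
    Before proving anything, the two sides are easy to compare numerically (an mpmath sketch; rf is the rising factorial, so rf(x, n+1) $= x(x+1)\cdots(x+n)$):

        from mpmath import mp, mpf, e, nsum, inf, rf, fac

        mp.dps = 30
        f1 = lambda x: nsum(lambda n: 1/rf(x, n + 1), [0, inf])
        f2 = lambda x: e*nsum(lambda n: (-1)**int(n)/((x + n)*fac(n)), [0, inf])
        for x in (mpf('0.5'), mpf(2), mpf('3.7')):
            print(x, f1(x), f2(x))   # the two columns should agree to working precision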

  • Prove that points E, H, and F are collinear
    by ZNatox on May 12, 2022 at 11:40 pm

    Let $\triangle ABC$ be a triangle. Let $M$ be the midpoint of side $[BC]$. $H$ and $I$ are, respectively, the orthocenter and incenter of $\triangle ABC$. Let $D = (MH)\cap(AI)$. $E$ and $F$ are the feet of the perpendiculars from $D$ to $(AB)$ and $(AC)$, respectively. Prove that $E, F$ and $H$ are collinear. Here is the source of the problem (in French). I have solved it using barycentric coordinates. As a matter of fact, one can get that lines $(MH)$ and $(AI)$ have equations, respectively: $\left[\displaystyle \frac{c^2-b^2} {S_{BC}}:\frac1{S_A}:-\frac1{S_A}\right]$ and $\left[\displaystyle 0:-c:b\right]$ (here $S_A=\displaystyle \frac{b^2+c^2-a^2}2$; define $S_B$ and $S_C$ cyclically; this is Conway's notation). Intersecting these lines gives, un-normalized: $D\left(\displaystyle\frac{S_{BC}}{S_A(b+c)}:b:c\right)$, which in turn gives: $F\left(\displaystyle\frac{S_C}{b}+\frac{S_{BC}}{S_A(b+c)}:0:\frac{S_A}{b}+c\right)$ and: $E\left(\displaystyle\frac{S_B}{c}+\frac{S_{BC}}{S_A(b+c)}:\frac{S_A}{c}+b:0\right)$. Now, clearly the determinant formed by $E, F$ and $H$ is null. The conclusion follows. What I’m asking for is a synthetic solution to this problem. I have tried to come up with one, but couldn’t. The main thing I noticed is that the line connecting the two touch-points of the incircle with sides $(AC)$ and $(AB)$ is parallel to line $(EF)$, so maybe what we’re looking for is a convenient homothety.
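
    While hunting for a synthetic proof, a coordinate sanity check is quick to run (a numpy sketch; the triangle is my choice):

        import numpy as np

        def incenter(A, B, C):
            a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
            return (a*A + b*B + c*C)/(a + b + c)

        def orthocenter(A, B, C):
            # intersect two altitudes: (B-C).X = (B-C).A and (C-A).X = (C-A).B
            Mat = np.array([B - C, C - A])
            return np.linalg.solve(Mat, np.array([(B - C) @ A, (C - A) @ B]))

        def intersect(P, d1, Q, d2):       # solve P + t d1 = Q + s d2
            t = np.linalg.solve(np.column_stack([d1, -d2]), Q - P)[0]
            return P + t*d1

        def foot(P, A, d):                 # foot of the perpendicular from P to the line A + t d
            d = d/np.linalg.norm(d)
            return A + ((P - A) @ d)*d

        A, B, C = np.array([0., 0.]), np.array([5., 0.]), np.array([1.5, 3.5])
        M = (B + C)/2
        H, I = orthocenter(A, B, C), incenter(A, B, C)
        D = intersect(M, H - M, A, I - A)
        E, F = foot(D, A, B - A), foot(D, A, C - A)
        print((F - E)[0]*(H - E)[1] - (F - E)[1]*(H - E)[0])   # ~ 0: E, H, F collinear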

  • Connection between Laplace Transforms and Pythagorean triples
    by Trent Hudson on May 12, 2022 at 10:00 pm

    I was studying for a recent university exam when I realized that there appears to be a connection between the Laplace transforms of certain functions and Pythagorean triples. Mainly, the Laplace transform of $t\sin(wt)$ is $\frac{2sw}{(s^2 + w^2)^2}$ and that of $t\cos(wt)$ is $\frac{s^2 - w^2}{(s^2+w^2)^2}$. The numbers $(2sw, s^2-w^2, s^2+w^2)$ match the definition of a Pythagorean triple, and I was wondering if anyone could explain why. I’ve spent a little bit of time working on it and couldn’t figure it out. Just a question out of curiosity, not that deep. Thank you in advance.
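
    The triple identity itself is one line of algebra, and $(s^2-w^2,\,2sw,\,s^2+w^2)$ is exactly Euclid's classical parametrization of Pythagorean triples: $$\left(s^2-w^2\right)^2 + \left(2sw\right)^2 = s^4 + 2s^2w^2 + w^4 = \left(s^2+w^2\right)^2.$$ One way to see why this pair appears here: $\frac{s^2-w^2}{(s^2+w^2)^2}$ and $\frac{2sw}{(s^2+w^2)^2}$ are the real and imaginary parts of $\frac{1}{(s-iw)^2} = \frac{(s+iw)^2}{(s^2+w^2)^2}$, and $\frac{1}{(s-iw)^2}$ is the Laplace transform of $t\,e^{iwt}$, whose real and imaginary parts are $t\cos(wt)$ and $t\sin(wt)$.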

  • Finding where in Ramsey’s theorem one uses the Axiom of Choice
    by Math_Images_Only on May 12, 2022 at 3:09 pm

    Ramsey’s Theorem for infinite graphs requires some choice, but when looking at the proof it is not evident how choice is exactly used. Sketch of the proof: Given a coloring $c:[\omega]^2\rightarrow 2$, we construct a homogeneous set as follows. Inductively we set $a_0=0$ and $S_0=\omega$; then $S_0$ is partitioned into $H^0_0=\{a\in S_0\setminus\{a_0\}:c(\{a,a_0\})=0\}$ and $H^0_1=\{a\in S_0\setminus\{a_0\}:c(\{a,a_0\})=1\}$. We set $S_1$ to be $H^0_0$ if it is infinite and $H^0_1$ if it is not, and we set $a_1$ to be the minimum of $S_1$. We define $a_{n+1}$ and $S_{n+1}$ from $S_n$ and $a_n$ analogously. Given the set $\{a_n:n\in \omega\}$, we have that for each $n$, $c(\{a_n,a_m\})$ is constant for all $m>n$, of color $i_n$. This defines a function $\omega \rightarrow 2$, so we take the preimage of $0$ if it is infinite and otherwise the preimage of $1$. I know I am wrong, but none of the steps seem arbitrary: since both $\omega$ and $2$ are well ordered, I can always choose a least element. So where exactly is the axiom of choice needed in this proof? Since the graph version is equivalent to the fact that $\aleph_0$ has the tree property, I am assuming it has to do with the existence of the “branch” $S_n$.

  • Can we reach every number of the form $8k+7$ with those $4$ functions?
    by Erfan Tavakoli on May 12, 2022 at 10:21 am

    Suppose you start with the number $1$ and at each step you can apply one of the functions $$\{2x+1, 3x, 3x+2, 3x+7\}$$ to it. Can you reach every number of the form $8k+7$? P.S. What I already know: the same is not true for $4k+3$ instead of $8k+7$; there is a counterexample (some number bigger than a thousand which I don’t remember exactly). I could prove that there is a number $l$ such that all $2^lk+2^l-1$ are reachable. There is a computer-aided proof that all numbers of the form $128k+127$ can be reached (from a comment). I was playing with affine maps and their properties when I encountered this example, which I verified by computer but can’t prove.
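
    Small cases can be explored with a bounded breadth-first search (a Python sketch; the cutoff is mine, and note the caveat in the final comment):

        from collections import deque

        LIMIT = 10**6
        reach = {1}
        q = deque([1])
        while q:
            x = q.popleft()
            for y in (2*x + 1, 3*x, 3*x + 2, 3*x + 7):
                if y <= LIMIT and y not in reach:
                    reach.add(y)
                    q.append(y)

        missing = [n for n in range(7, LIMIT, 8) if n not in reach]
        print(len(missing), missing[:10])
        # caveat: a number reported missing here might still be reachable
        # through intermediate values larger than LIMIT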

  • Finding the value of $\lim_{a\to \infty}\int_0^1 a^x x^a \,dx$
    by Koro on May 12, 2022 at 8:09 am

    I’m trying to find the value of $$\lim_{a\to \infty}\int_0^1 a^x x^a \,dx$$ My attempt: Let $\epsilon >0$ be given. $x\mapsto a^{x}$ is continuous at $1$, so there is a $d_a\in (0,1)$ such that $|a^{x} -a|< \epsilon$ for all $x\in [d_a,1]$. WLOG, let $d_a<1/2$. Then \begin{align*} \left|\int _{0}^{1} x^{a} a^{x} \,dx- \int _{0}^{1} ax^{a} \,dx \right| &= \left|\int _{0}^{1}\left( a^{x} -a\right) x^{a} \,dx\right| \\ & \leq \left|\int _{0}^{d_a}\left( a^{x} -a\right) x^{a} \,dx\right|+\left|\int _{d_a}^{1}\left( a^{x} -a\right) x^{a} \,dx\right|\\ & \leq \int _{0}^{d_a}\left( a -a^{x}\right) x^{a} \,dx+\epsilon \left|\int _{d_a}^{1} x^{a} \,dx\right|\\ & \leq \int _{0}^{d_a}\left( a -a^{x}\right) x^{a} \,dx+\epsilon \\ & \leq \int _{0}^{1/2} a(1/2)^{a} \,dx-a\int _{0}^{d_a} x^{a}\,dx+\epsilon \\ & \leq a(1/2)^{a} +\epsilon \end{align*} Hence $$0\leq \liminf _{a\rightarrow \infty }\left|\int _{0}^{1} x^{a} a^{x} \,dx- \int _{0}^{1} ax^{a}\,dx \right| \leq \limsup _{a\rightarrow \infty }\left|\int _{0}^{1} x^{a} a^{x} \,dx- \int _{0}^{1} ax^{a}\,dx \right| \leq \epsilon. $$ Since this is true for every $\epsilon >0$, it follows that $\lim _{a\rightarrow \infty }\left(\int _{0}^{1} x^{a} a^{x} \,dx- \int _{0}^{1} ax^{a}\,dx\right) =0$. Is my proof correct? Thanks.
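
    A numeric check is consistent with this conclusion, and combined with $\int_0^1 ax^a\,dx = \frac{a}{a+1}\to 1$ it suggests the limit is $1$ (an mpmath sketch; the values of $a$ are my choices):

        from mpmath import mp, mpf, quad

        mp.dps = 25
        for a in (10, 100, 1000):
            val = quad(lambda x: mpf(a)**x * x**a, [0, 1])
            print(a, val, mpf(a)/(a + 1))   # the gap between the two columns shrinks as a grows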