Mathematics Stack Exchange News Feeds

  • Smallest $\mathbb R^n$ into which $SO(3)$ can be topologically embedded?
    by WillG on October 22, 2021 at 3:23 am

    I think there are theorems in differential geometry guaranteeing that $SO(3)$ can be embedded into $\mathbb R^n$ for some $n$. But do we know what particular $n$ this is for $SO(3)$, and do we know of a particular embedding? This would be nice to know if $n$ happens to be small enough for us to “think about” the global topology of $SO(3)$ in a somewhat geometrical way.

  • Without superior math, can we evaluate this limit?
    by namphamduc on October 22, 2021 at 1:21 am

    We all know that for $$\lim_{x\to 0}\frac{\sin x - x}{x(1-\cos x)}$$ we can use L’Hôpital’s rule or a Taylor series to resolve the indeterminate form. But without those tools, using only high-school knowledge, how can we evaluate this limit? It seems difficult to transform the numerator; any ideas? Thank you!
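    For reference, a quick numerical check (not a derivation) is consistent with the limit being $-\frac{1}{3}$, since $\sin x - x \sim -x^3/6$ and $x(1-\cos x) \sim x^3/2$. A minimal sketch:

```python
import math

def g(x):
    # (sin x - x) / (x (1 - cos x)); both numerator and denominator
    # vanish like x^3 near 0, so the ratio tends to a finite limit
    return (math.sin(x) - x) / (x * (1 - math.cos(x)))
```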

  • Evaluate $\int_0^t e^{-\lambda s} \,\text{erf}(\ln(t-s))\,\text{d}s$?
    by MisterBlobfish on October 21, 2021 at 10:07 am

    I’m trying to efficiently graph the function $I(t)$ for $t>0$ where $$I(t) :=\int_0^t e^{-\lambda s} \,\text{erf}(\ln(t-s))\,\text{d}s,\qquad \lambda>0$$ but its evaluation is beyond my powers of integration. I can use a numerical integration package to plot it, but it is pretty slow. Happy to compute a decent approximation if it’s available. Mathematica was not able to provide an answer, and I’ve tried some integral substitutions like $u=\ln(t-s)$, as well as looked through the fantastic table of erf integrals, but no solution seems to be clear. My current strategy is to approximate the exponential by a quartic polynomial $$\exp(-\lambda x)\approx \sum_{k=0}^4 c_k (\lambda x)^k$$ in some region $x\in[0,R]$, and set to zero when $x>R$. Then I can compute $\int s^k \text{erf}(\ln(t-s))\,\text{d}s$, but there are a painful number of terms when using a quartic – are there any better suggestions? Perhaps some other approximation of the $\exp$, or nice series expansion of the function? Thanks! Edit: Just to be clear, imagine I am given ten thousand different values of $\lambda$, and I want to graph $I(t)$ for each of them in the region $t\in [0,100]$. How can I do this efficiently?
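    Not the requested closed form, but for reference, a minimal stdlib-only sketch of the slow numerical baseline being discussed (the function name `I_num` and the step count are my own choices); a midpoint rule conveniently sidesteps the endpoint $s=t$, where $\ln(t-s)\to-\infty$ while the integrand itself stays bounded:

```python
import math

def I_num(t, lam, steps=2000):
    """Midpoint-rule approximation of I(t) = ∫_0^t e^{-lam*s} erf(ln(t-s)) ds.
    Midpoints avoid s = t, where ln(t-s) -> -inf (the integrand stays
    bounded there, since erf(ln(t-s)) -> -1)."""
    if t <= 0:
        return 0.0
    h = t / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * h          # midpoint of the k-th subinterval
        total += math.exp(-lam * s) * math.erf(math.log(t - s))
    return total * h
```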

  • I’m trying to find where a “3D logarithmic spiral” converges.
    by timeslidr on October 21, 2021 at 2:30 am

    Background and problem: It’s in quotes because I don’t know what I’m talking about. I’m more of an artist who likes to dabble in math, so bear with me. I know the title is a little confusing, so I have made a pretty animation to illustrate what I mean. I assume this is some type of discrete logarithmic spiral. It always starts at the origin with no translations, rotations, or scale applied. You’ll notice in the video that the 2nd cube has 3 axes coming out of it. Its transform drives the rest of the cubes, as seen in this clip. I can translate, rotate, and scale it, and as long as the translation and rotation are not 0, and the scale is between 0 and 1, it’ll make a spiral. Anything else won’t produce a spiral as far as I’m aware. I’ve searched Google for about 5 hours today and haven’t found anything, so either I don’t know the proper terms, or it’s never been done before (which I doubt), or it’s impossible (which I highly doubt). I’m trying to find the $x, y,$ and $z$ for where this would converge, in non-polar/spherical form, so I can plug them into my software.

    Why I’m here: Some hindrances I’m having are that I’ve never studied spherical coordinates (or even polar coordinates), which I assume would be useful; I don’t have a background in linear algebra at all; and I’ve been out of school about 8 years at this point, so I’m very rusty. I asked this question on Reddit and was given an answer, but I don’t understand it. They suggested: if your initial vector is $v$ and your linear transformation is $T$, then the point it converges to is $(I+T+T^2+T^3+\cdots)v=(I-T)^{-1}v$. For the purpose of computation, you need to write $T$ as a matrix and $v$ as a column vector. Can someone break that down a bit more? Is $v$ my 2nd cube with the axes poking out? I don’t understand what $T$ means at all. Is it even right? I’m OK with not being spoon-fed the answer, but I need a little bit more than this. I’m not even sure which branch of math I need.
    I know it’s some sort of geometric series because it’s decreasing, and I can see it converging to some 3d point in space. That reminds me of calculus. I’ve taken Calc 2 at university, but that was a long time ago and we never talked about 3d space. Any help would be useful.

    Motivation: In case you’re wondering why I’m interested in this: I’ve been interested in these scaling looping animations lately and I wanted to understand the math behind them. I found some videos by the people who make them (one of the guys mentioned in this video did an animation for Justin Bieber, so they’re legit) and even they don’t understand the math. They’re using their artistic skills to get close, but that can be time-consuming. I want to know the math behind it. And I refuse to believe the math hasn’t been worked out already. The point I’m trying to find is the one which, if you scale the whole spiral from it, gives this unique property of looping perfectly. And if it’s scaled at an exponential rate $\bigg(\dfrac{1}{s}\bigg)^{\frac{n(f - 1)}{a}}$, where $s$ is the scaling factor, $n$ is the number of cycles you want, $f$ is the current frame, and $a$ is the length of the animation in frames, then it ends up looking like it’s scaling linearly. In this example, on the left you get the illusion of it scaling forever, but on the right you can see what’s really happening. If it doesn’t scale at an exponential rate, then it starts slow and gets faster, which also breaks the illusion. It’s a 1m square that gets shrunk in half each time $(1 + 1/2 + 1/4 + \cdots)$, so I know the limit is $2$. I’m then scaling the whole spiral from $(2, 0, 0)$ by a factor of $2$ over $50$ frames. It only scales perfectly from $(2, 0, 0)$. This is what happens if I don’t use the right point: it slowly drifts away and snaps back each cycle, breaking the illusion. So that’s why it’s important for me to find this location.

    Updates: A user on Reddit found a solution that works in 2D, and I can confirm that it works.
    Their solution is as follows: let $(x', y')$ be the point where it converges. Then $$x' = Ax - By,\qquad y' = Bx + Ay,$$ where $$A=\frac{1 - k\cos\theta}{1 + k^2 - 2k\cos\theta},\qquad B=\frac{k\sin\theta}{1 + k^2 - 2k\cos\theta},$$ $k$ is the scaling factor, and $\theta$ is the angle.

    Update 2: I think I’m getting closer. I’ve been told that: in two dimensions, the transformation $T$ is given by $T = k\,R(\theta) = k\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$, and therefore $T^n = k^n\begin{bmatrix}\cos(n\theta) & -\sin(n\theta)\\ \sin(n\theta) & \cos(n\theta)\end{bmatrix}$. However, this cannot be easily generalized to three dimensions. You would need to explain more precisely what you mean by a rotation in three dimensions. Do you want a single rotation with respect to a given axis? If you want to apply three rotations with respect to the three coordinate axes, in what order do you apply them? How would I know what order to apply them, since I can edit any one of them at any time?
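    To unpack the quoted 2D formulas concretely: roughly, $T$ is the matrix of one scale-and-rotate step and $v$ the offset from one cube to the next, and the limit point is the geometric series $(I+T+T^2+\cdots)v$. A stdlib-only sketch (function names are mine) checks the quoted closed form against brute-force partial sums; it also reproduces the $(2,0)$ limit of the halving-square example:

```python
import math

def converge_2d(x, y, k, theta):
    """Closed-form limit point (the quoted A, B formulas) for the 2D spiral
    with per-step scale k in (0, 1) and rotation theta."""
    d = 1 + k * k - 2 * k * math.cos(theta)
    A = (1 - k * math.cos(theta)) / d
    B = k * math.sin(theta) / d
    return A * x - B * y, B * x + A * y

def converge_2d_series(x, y, k, theta, terms=200):
    """Same point by brute force: partial sums of v + Tv + T^2 v + ..."""
    px = py = 0.0
    vx, vy = x, y
    for _ in range(terms):
        px += vx
        py += vy
        # apply T = k * R(theta) once
        vx, vy = (k * (math.cos(theta) * vx - math.sin(theta) * vy),
                  k * (math.sin(theta) * vx + math.cos(theta) * vy))
    return px, py
```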

  • What is the area of $y^2=\sqrt x-x$ (Guitar Pick)
    by Aaron Night on October 21, 2021 at 1:34 am

    I made a typo while experimenting on Desmos and typed $y^2=\sqrt x-x$. It drew a shape, one that I’ve never seen before: With my very limited knowledge of calculus, I know the area would be equal to: $$2\int_0^1 \sqrt{\sqrt x – x} \,dx$$ However, I have no clue how to evaluate this integral. Using Desmos, I can get a decimal approximation (it’s about 0.785), and Wolfram Alpha can give me the final result ($\pi/4$). No site I can think of has the solution and steps to solve it, so I figured I’d ask it here. How would you evaluate this integral?
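    One possible route to the closed form (not necessarily the intended one): the substitution $\sqrt{x}=t$ turns the integrand into a Beta function, which reproduces Wolfram Alpha’s $\pi/4$:

```latex
\begin{aligned}
2\int_0^1 \sqrt{\sqrt{x} - x}\,dx
&= 4\int_0^1 t\sqrt{t-t^2}\,dt && (x = t^2,\ dx = 2t\,dt)\\
&= 4\int_0^1 t^{3/2}(1-t)^{1/2}\,dt
 = 4\,B\!\left(\tfrac52,\tfrac32\right)
 = 4\cdot\frac{\Gamma(\tfrac52)\,\Gamma(\tfrac32)}{\Gamma(4)}
 = \frac{4}{6}\cdot\frac{3\sqrt{\pi}}{4}\cdot\frac{\sqrt{\pi}}{2}
 = \frac{\pi}{4}.
\end{aligned}
```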

  • compute the integral $\int_0^1 \int_0^1 \int_0^1 \frac{1}{(1+x^2+y^2+z^2)^2} dxdydz$ [duplicate]
    by user3472 on October 20, 2021 at 10:53 pm

    Determine, with justification, the value of the integral $\int_0^1 \int_0^1 \int_0^1 \frac{1}{(1+x^2+y^2+z^2)^2} dxdydz$. I tried converting this integral to cylindrical coordinates with $r = \sqrt{x^2 + y^2}$ ranging from $0$ to $\sqrt{2}$, $0\leq \theta \leq \pi/2, 0\leq z \leq 1,$ where $\theta $ is such that $x= r\cos\theta, y = r\sin\theta.$ However, this seems to lead to an incorrect result. Which bounds have I gotten wrong? Also, it seems that the integral over the unit cube equals twice the integral over the region defined by $0\leq z\leq 1, 0\leq x\leq 1, 0\leq y\leq x,$ but I’m not sure why. The result should be $\frac{\pi^2}{32},$ which is basically what WolframAlpha outputs. Using spherical coordinates seems to make the integration more complicated due to the integration factor.
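    A brute-force numerical sanity check of the claimed value $\frac{\pi^2}{32}$ (a midpoint rule on a modest grid, stdlib only; the grid size is an arbitrary choice of mine):

```python
import math

def triple_integral(n=50):
    """Midpoint-rule approximation of
    ∫∫∫ over [0,1]^3 of dx dy dz / (1 + x^2 + y^2 + z^2)^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            for k in range(n):
                z = (k + 0.5) * h
                total += 1.0 / (1 + x * x + y * y + z * z) ** 2
    return total * h ** 3
```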

  • Is there a test for convexity?
    by Davi Barreira on October 20, 2021 at 6:35 pm

    This is a very heterodox question. But here is the context. I’m programming a computational package, and the user may write/define a cost function freely, e.g. $$ \text{cost}(x,y) = e^{|x-y|} (x-y)^2. $$ Now, the algorithm programmed only works if the cost function is convex. Here is where my question comes in. Is there some kind of test to verify whether the function is indeed convex? Mathematically, we can try to manipulate the function in order to verify whether it satisfies the convexity definition, but this scenario does not allow for such approaches. I was thinking of something like “sample some points, calculate the function, and verify whether the midpoint value lies below the linear interpolation”. How many points would be necessary to correctly guess that the function is convex with a certain probability? Any references on this kind of odd question (the probability that a function is convex)?
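    One concrete form of the sampling idea from the question (names, tolerances, and trial counts are my own); note that such a test can only ever *refute* convexity with certainty — passing all trials is merely probabilistic evidence:

```python
import math, random

def probably_convex(f, lo, hi, trials=20000, tol=1e-6, seed=0):
    """Randomized midpoint test on the box [lo, hi]: look for a violation of
    f((p+q)/2) <= (f(p)+f(q))/2.  A violation disproves convexity; no
    violation after many trials is only probabilistic evidence for it."""
    rng = random.Random(seed)
    for _ in range(trials):
        p = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        q = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        mid = [(u + v) / 2 for u, v in zip(p, q)]
        if f(mid) > (f(p) + f(q)) / 2 + tol:
            return False            # definite counterexample found
    return True

# the cost function from the question, as a function of a point v = (x, y)
cost = lambda v: math.exp(abs(v[0] - v[1])) * (v[0] - v[1]) ** 2
```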

  • Why is my process of differentiation (trigonometric substitution) not working?
    by tryingtobeastoic on October 20, 2021 at 11:54 am

    Prologue (you can skip straight to the “Problem” section (bolded) if you want): First, to show you what method (let’s call it the trigonometric substitution method) I’m talking about, and to show that it works, I’ll describe its tenets and then work an example with it. Basic tenets of the trigonometric substitution method: It is applicable when we are differentiating inverse trigonometric functions. $x$ should be substituted with a trig ratio that can hold all the possible values of $x$ and that will make differentiation easier. For example, in $\cos^{-1}(\sqrt{\frac{1+x}{2}})$, $-1\leq x\leq1$, so it can be substituted with $\cos\theta$ or $\sin\theta$; substituting with $\sin\theta$ doesn’t make our life easier, so we have to substitute with $\cos\theta$. Similarly, in $\tan^{-1}\left(\sqrt{\frac{1-x}{1+x}}\right)$ ($-1<x\leq1$) and $\sin^{-1}\left(\frac{1-x^{2}}{1+x^{2}}\right)$ ($x\in(-\infty,\infty)$), $x$ has to be substituted with $\cos\theta$ & $\tan\theta$ respectively. All of these can also be done exclusively using the chain rule; however, the working might get tedious that way. Example: Differentiate with respect to $x$: $\tan^{-1}\frac{4x}{\sqrt{1-4x^2}}.$ Differentiation using trigonometric substitution: Let $y=\tan^{-1}\frac{4x}{\sqrt{1-4x^2}}$ and $2x=\cos\theta\implies\theta=\cos^{-1}2x\ [\text{assuming $\theta$ is within the principal range of $\arccos$}]$. Now, $$y=\tan^{-1}\frac{4x}{\sqrt{1-4x^2}}$$ $$y=\tan^{-1}\frac{2\cos\theta}{\sqrt{1-\cos^2\theta}}$$ $$y=\tan^{-1}2\cot\theta$$ $$\frac{dy}{dx}=\frac{d}{d(2\cot\theta)}(\tan^{-1}2\cot\theta)\cdot\frac{d}{d(\cot\theta)}(2\cot\theta)\cdot\frac{d}{d\theta}(\cot\theta)\cdot\frac{d\theta}{dx}$$ $$\vdots$$ $$\frac{dy}{dx}=\frac{4}{(12x^2+1)\sqrt{1-4x^2}}$$ This is the correct answer. We could’ve taken $2x=\sin\theta$ as well, and the answer would’ve been the same. We could’ve done the problem exclusively with the chain rule as well.
    Problem: Differentiate with respect to $x$: $\sin^{-1}(2x\sqrt{1-x^2}).$ Attempt 1: Let $y=\sin^{-1}(2x\sqrt{1-x^2})$ and $x=\sin\theta\implies\theta=\sin^{-1}x\ [\text{assuming $\theta$ is within the principal range of $\arcsin$}]$ $$y=\sin^{-1}(2x\sqrt{1-x^2})$$ $$y=\sin^{-1}(2\sin\theta\cos\theta)$$ $$y=\sin^{-1}(\sin2\theta)$$ $$y=2\theta\tag{1}$$ $$y=2\sin^{-1}x$$ $$\frac{dy}{dx}=\frac{2}{\sqrt{1-x^2}}$$ Attempt 2: Let $y=\sin^{-1}(2x\sqrt{1-x^2})$ and $x=\cos\theta\implies\theta=\cos^{-1}x\ [\text{assuming $\theta$ is within the principal range of $\arccos$}]$ $$y=\sin^{-1}(2x\sqrt{1-x^2})$$ $$y=\sin^{-1}(2\sin\theta\cos\theta)$$ $$y=\sin^{-1}(\sin2\theta)$$ $$y=2\theta\tag{2}$$ $$y=2\cos^{-1}x$$ $$\frac{dy}{dx}=-\frac{2}{\sqrt{1-x^2}}$$ Interestingly enough, we get two different answers using $x=\cos\theta$ & $x=\sin\theta$, which shouldn’t have been the case. More importantly, both of the answers are wrong. Questions: Why am I not able to differentiate correctly using the trigonometric substitution method? In the graph of the correct derivative and the incorrect derivative found using $x=\sin\theta$, there is an overlap between the two from $x=-0.707$ to $x=0.707$. What is the significance of the number $0.707$, and why is the overlap happening? In the graph of the correct derivative and the incorrect derivative found using $x=\cos\theta$, there is an overlap between the two from $x=-0.707$ to $x=-1$ below the x-axis and from $x=0.707$ to $x=1$ above the x-axis. What is the significance of the number $0.707$, and why is the overlap happening? My observations: My hunch is that lines $(1)$ & $(2)$ are wrong. However, I don’t want to explain my hunch because I fear that it might complicate matters unnecessarily.
This might help you in answering the question: it contains the graphs of the original problem, the incorrect derivative found using $x=\sin\theta$, the incorrect derivative found using $x=\cos\theta$ & the correct derivative that can be found by differentiating exclusively using the chain rule.
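    The overlap described in the question can be checked numerically. In this sketch (naming is mine) a central difference of the original function is compared with the candidate $\frac{2}{\sqrt{1-x^2}}$ from Attempt 1; the agreement flips sign at $0.707\approx\frac{1}{\sqrt{2}}$:

```python
import math

def y(x):
    return math.asin(2 * x * math.sqrt(1 - x * x))

def num_deriv(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

def candidate(x):
    # the derivative obtained in Attempt 1 via x = sin(theta)
    return 2 / math.sqrt(1 - x * x)
```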

  • Determining the period of $ \frac{\sin(2x)}{\cos(3x)}$
    by ZchGarinch on October 20, 2021 at 10:01 am

    I would like to compute the period of this function, which is a quotient of two trigonometric functions: $$ \frac{\sin(2x)}{\cos(3x)}$$ Is there a theorem for this? What trick can one use to easily find the period? I started by expanding the fraction but am stuck. For example, let $T$ be the period to be found: $$\frac{\sin(2x)}{\cos(3x)} =\frac{\sin(2x + 2T)}{\cos(3x+3T)} = \frac{\sin(2x) \cos(2T)+\sin(2T) \cos(2x)}{\cos(3x) \cos(3T)-\sin(3T)\sin(3x)}$$ Thanks for your help.
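    Not a proof, but a quick numerical sanity check is consistent with the period being $2\pi$; note that $\pi$ merely flips the sign, since $\cos(3x+3\pi)=-\cos(3x)$. A small sketch (sample points chosen to avoid zeros of $\cos(3x)$):

```python
import math

def f(x):
    return math.sin(2 * x) / math.cos(3 * x)

def looks_periodic(T, xs=(0.1, 0.7, 1.3), tol=1e-9):
    """Check f(x + T) == f(x) at a few sample points (necessary, not sufficient)."""
    return all(abs(f(x + T) - f(x)) < tol for x in xs)
```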

  • Finding $\int_0^{\frac{\pi}{2}} \frac{x}{\sin x} dx $
    by Tavish on October 19, 2021 at 7:47 pm

    Is there a way to show $$\int_0^{\frac{\pi}{2}} \frac{x}{\sin x} dx = 2C$$ where $C=\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^2} $ is Catalan’s constant, preferably without using complex analysis? The following is an attempt to expand it as a series: \begin{align*} \int_0^{\frac{\pi}{2}} \frac{x}{\sin x} dx &= \int_0^{\frac{\pi}{2}} \frac{x}{1-\cos^2 x}\sin x\ dx \\ &= \sum_{n=0}^{\infty} \int_0^{\frac{\pi}{2}} x\sin x \ \cos^{2n}x \ dx \\ &= \sum_{n=0}^{\infty}\frac{1}{2n+1} \int_0^{\frac{\pi}{2}} \cos^{2n+1}x \ dx \\ &= \sum_{n=0}^{\infty} \frac{4^n}{\binom{2n}{n}(2n+1)^2} \end{align*} which is close but not quite there.
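    The final series can at least be checked numerically against $2C$ (the terms decay like $n^{-3/2}$, so convergence is slow); the incremental update of $a_n = 4^n/\binom{2n}{n}$ below avoids large factorials:

```python
def series_partial(N):
    """Partial sum of sum_{n>=0} 4^n / (binom(2n,n) (2n+1)^2),
    using a_{n+1} = a_n * 2(n+1)/(2n+1) for a_n = 4^n / binom(2n,n)."""
    a, total = 1.0, 0.0
    for n in range(N):
        total += a / (2 * n + 1) ** 2
        a *= 2 * (n + 1) / (2 * n + 1)
    return total

CATALAN = 0.915965594177219015  # Catalan's constant
```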

  • Maximizing an angle based on certain constraints
    by C_Lycoris on October 19, 2021 at 5:17 pm

    $A(0,a)$ and $B(0,b)\ (a,b>0)$ are the vertices of $\triangle ABC$, where $C(x,0)$ is variable. Find the value of $x$ for which angle $ACB$ is maximum. Now geometry’s never really been my strong point, so I decided to go with a bit of calculus. First, I used the sine rule: $$\sin C=\frac{b-a}{2R},$$ where $R$ is the radius of the circumcircle. I note that for angle $C$ to be maximum, $\sin C$ should be maximum. As such, $R$ must be minimum. Next, I used the relation $$R=\frac{(b-a)\cdot\sqrt{x^2+b^2}\cdot\sqrt{x^2+a^2}}{4\Delta},$$ where $\Delta$ is the area of $ABC$, namely $\frac{(b-a)x}{2}$. A bit of comparatively lengthy differentiation gives me the value of $x$ as $\sqrt{ab}$. When I go through the solutions, it’s simply been stated: for angle $ACB$ to be maximum, the circle passing through $A,B$ will touch the x-axis at $C$. Beyond this, it’s been solved using the very simple $OC^2=OA\cdot OB$, where $O$ is the origin. So the above statement seems to be the difference between a lengthy differentiation and a one-line solution. It’s getting a little difficult for me to see why the statement should be intuitive. Could someone shed a bit more light on it for me, and possibly provide an intuitive proof?
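    A quick numerical confirmation of $x=\sqrt{ab}$, using $\angle ACB = \arctan(b/x)-\arctan(a/x)$ for $x>0$ (the values $a=1$, $b=4$ are illustrative choices of mine):

```python
import math

def angle_ACB(x, a, b):
    """Angle subtended at C = (x, 0) by A = (0, a) and B = (0, b), x > 0."""
    return math.atan(b / x) - math.atan(a / x)

a, b = 1.0, 4.0                      # illustrative; claimed maximiser is sqrt(ab) = 2
xs = [i / 100 for i in range(1, 1001)]
best_x = max(xs, key=lambda x: angle_ACB(x, a, b))
```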

  • Axioms and uniqueness for the Euler class
    by Jonas on October 19, 2021 at 4:15 pm

    In this question it was asked whether the 4 properties listed on the Wikipedia page uniquely characterise the Euler class. I answered no and claimed: For every oriented vector bundle $E\to X$ of rank $n$ there exists a unique class $e(E) \in H^n(X;\mathbb{Z})$ satisfying the following axioms: Naturality: for every $f:Y \to X$ we have $e(f^*E) = f^*e(E)$. Sum formula: if $W\to X$ is another oriented vector bundle, we have $e(E\oplus W) = e(E)\cup e(W)$. Orientation: if $\bar{E}$ is $E$ with the opposite orientation, we have $e(\bar{E}) = -e(E)$. Normalisation: for the (real oriented) tautological bundle $\gamma^1 \to \mathbb{CP}^1$ we have $\langle e(\gamma^1),[\mathbb{CP}^1]\rangle = -1$. I thought one could prove this using a real oriented splitting principle, that is, for $E \to X$ we construct a map $f:Y \to X$ with \begin{align*}f^*E \cong P_1 \oplus \cdots \oplus P_k \oplus \xi \end{align*} where the $P_i$ are oriented 2-plane bundles, and $\xi = \underline{\mathbb{R}}$ with $k=(n-1)/2$ if $n$ is odd, and $\xi = 0$ with $k=n/2$ if $n$ is even. But Jack Lee pointed out some trouble. If $f^*:H^*(X) \to H^*(Y)$ were injective, I would be fine. But in the $n$ odd case, this cannot be true, since $e(f^*E) = 0$ while $e(E)$ can be non-trivial two-torsion. In the even case there is Proposition III.11.2 of Spin Geometry by Lawson & Michelsohn, which claims that $f^*$ is injective, but their proof is not clear (see this question by Jack Lee). So my question is whether anybody can give a proof of the uniqueness statement (or a counterexample, in case the axioms do not uniquely characterise the class). Any proof is welcome, but my goal was to stay as close as possible to the corresponding proofs of uniqueness for Stiefel-Whitney and Chern classes via splitting principles. Even without the injectivity of the pullback this could be doable.

  • Calculate $\lim_{n\rightarrow\infty}\frac{\int_{0}^{1}f^n(x)\ln(x+2)dx}{\int_{0}^{1}f^n(x)dx}$
    by Piquancy on October 19, 2021 at 8:38 am

    Given $$f(x)=1-x^2+x^3, \qquad x\in[0,1],$$ calculate $$ \lim_{n\rightarrow\infty}\frac{\int_{0}^{1}f^n(x)\ln(x+2)\,dx}{\int_{0}^{1}f^n(x)\,dx} $$ where $f^n(x)=\underbrace{f(x)\cdot f(x)\cdots f(x)}_{n\ \text{times}}$. This is a question from the CMC (a Chinese mathematics competition) in $2017$. The solution suggests the following idea: given $s\in(0,\frac{1}{2})$, prove $$\lim_{n\rightarrow\infty}\frac{\int_{s}^{1}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}=0.$$ The final result is $\ln2$.

    My approach: I split the numerator as $$\int_{s}^{1-s}f^n(x)\,dx+\int_{1-s}^{1}f^n(x)\,dx.$$ For the claim $$\lim_{n\rightarrow\infty}\frac{\int_{1-s}^{1}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}=0,$$ here is the proof. Substituting $x\mapsto 1-x$ in the numerator (note $f(x)=1-x^2(1-x)$ and $f(1-x)=1-x(1-x)^2$), we get, when $n\geq\frac{1}{s^2}$, $$\frac{\int_{1-s}^{1}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}=\frac{\int_{0}^{s}(1-x(1-x)^2)^n\,dx}{\int_{0}^{s}(1-x^2(1-x))^n\,dx}\leq\frac{\int_{0}^{s}(1-\frac{x}{4})^n\,dx}{\int_{0}^{s}(1-x^2)^n\,dx}\leq\frac{\int_{0}^{s}(1-\frac{x}{4})^n\,dx}{\int_{0}^{1/\sqrt{n}}(1-\frac{x}{\sqrt{n}})^n\,dx}=\frac{\frac{4}{n+1}\left(1-(1-\frac{s}{4})^{n+1}\right)}{\frac{\sqrt{n}}{n+1}\left(1-(1-\frac{1}{n})^{n+1}\right)}\sim\frac{4}{\sqrt{n}\,(1-\frac{1}{e})}\rightarrow0.$$

    For the claim $$\lim_{n\rightarrow\infty}\frac{\int_{s}^{1-s}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}=0,$$ here is the proof. Given $t$ with $0<t<s<\frac{1}{2}$, we have $$f(t)>f(s)>f(1-s).$$ Define $m_t=\min_{x\in[0,t]}f(x)$ and $M_s=\max_{x\in[s,1-s]}f(x)$; then $$m_t=f(t)>f(s)=M_s,$$ and $$\frac{\int_{s}^{1-s}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}\leq\frac{\int_{s}^{1-s}f^n(x)\,dx}{\int_{0}^{t}f^n(x)\,dx}\leq\frac{(1-2s)M_s^n}{t\,m_t^n}=\frac{1-2s}{t}\left(\frac{M_s}{m_t}\right)^n\rightarrow0.$$ In conclusion, we get $$\lim_{n\rightarrow\infty}\frac{\int_{s}^{1}f^n(x)\,dx}{\int_{0}^{s}f^n(x)\,dx}=0.$$
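    A quick numerical check of the answer $\ln 2$ for a largish $n$: as $n$ grows, $f^n$ concentrates near $x=0$, where $\ln(x+2)\to\ln 2$. A rough midpoint-rule sketch with my own naming:

```python
import math

def f(x):
    return 1 - x * x + x ** 3

def ratio(n, pts=20000):
    """Midpoint-rule value of ∫_0^1 f^n ln(x+2) dx / ∫_0^1 f^n dx."""
    h = 1.0 / pts
    num = den = 0.0
    for k in range(pts):
        x = (k + 0.5) * h
        w = f(x) ** n              # mass concentrates near x = 0 as n grows
        num += w * math.log(x + 2)
        den += w
    return num / den
```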

  • What do we know about the intersection of co-countable and Euclidean topologies?
    by Asaf Karagila on October 18, 2021 at 10:09 pm

    The Euclidean topology on $\Bbb R$ is well-understood. It is the one generated by the open intervals (or even just the open intervals with rational end-points). To some extent, we also understand the co-countable topology, which is generated by the sets whose complement is countable. We can easily see that the Euclidean topology is neither a subset nor a superset of the cocountable topology: $(0,1)$ is open in the Euclidean topology, but its complement is uncountable. $\Bbb Q$ is countable, but it is not closed in the Euclidean topology. So we can consider $\tau$ to be the intersection of these two topologies. Namely, $A\subseteq\Bbb R$ is a member of $\tau$ (i.e., open) if and only if it is empty or it is both co-countable and a countable union of intervals. So, for example, neither $(0,1)$ nor the irrational numbers are open in this topology, as remarked above. On the other hand, consider $A=\left\{\frac1{2^n}\mathrel{}\middle|\mathrel{} n\in\Bbb N\right\}\cup\{0\}$; then $\Bbb R\setminus A$ is open. Questions. Is there a nice way to describe $\tau$? Does it have a name? Are there any nice (non-trivial) properties of this topology?

  • Combined real + p-adic numbers?
    by Niklas on October 18, 2021 at 5:06 pm

    So most p-adic notes inevitably beat us over the head with Ostrowski’s theorem on absolute values. But what if we forsake absolute values, and instead use a digit system where the value $v(x)$ of a number $x$ is a tuple $(a,b)$? Here $a$ is the smallest negative exponent of some prime $p$ in the representation of $x$ (recorded by its magnitude), while $b$ is the smallest positive exponent. For instance, for $p=2$, $v(9/4) = v(1/4 + 2) = (2,1)$. However, we allow infinite expansions too: we call a sequence ‘left-convergent’ if the first component of $v(x_n - x_{n-1})$ approaches infinity, and likewise ‘right-convergent’ for the second. In other words, high positive or negative powers of a prime converge to two different zeros. We can therefore write unconditionally convergent expressions of the form $\sum_{-\infty}^{\infty} p^n a_n$ where each $a_n$ is between $0$ and $p-1$. So we basically obtain real (base $p$) plus p-adic ‘combined’ numbers. The number zero can e.g. be represented as either $\cdots 0.0 \cdots$ or $\cdots (p-1)(p-1)(p-1).(p-1)(p-1)(p-1) \cdots$ Since the latter is divisible by all factors of $p-1$, for $p>2$ we aren’t dealing with a field (at least by the usual definition), as there are zero divisors. Though I don’t know what happens for $p=2$? Something like $\frac{1}{x}$ for $(x,p)=1$ will have multiple solutions, with the ‘new’ solutions corresponding to linear combinations of the real and p-adic solutions. E.g. $\frac{1}{3}$ for $p=2$ has representations: $0.010101 \cdots$ (real); $\cdots 0101011$ (2-adic); $\cdots 010101.1010101 \cdots$ (the sum of both, divided by two). This sort of feels like a ‘field extension’, since we are adding new solutions to an equation. In these numbers, series like $\displaystyle \sum_{n=0}^{\infty} p^{(-1)^n n}$ converge to finite values. Also, power series converge absurdly: $\sum_{n=-\infty}^{\infty} x^n = 0$ for all $x$ with a power of $p$ in the numerator or denominator. How about products? (responding to Julian Rosen)
    In general we can use Cauchy products if at least one of the numbers has a finite representation. However, multiplying an infinite p-adic and an infinite real number together causes convergence issues. If we write $\frac{1}{x}_p$ and $\frac{1}{x}$ for the p-adic and real representatives of a fraction, then $\frac{1}{x}_p \cdot \frac{1}{x} = \frac{1}{x}_p \cdot \frac{x}{x^2} = \frac{1}{x^2} = \frac{x}{x^2}_p \cdot \frac{1}{x} = \frac{1}{x^2}_p$. Hence by implication there can’t be a ‘single’ representative for this product. My questions are: Does this approach make sense / does it have a name? Does it have any benefits or unique advantages, e.g. tying p-adic results to real ones? For instance, ‘hypothetically’, if we can demonstrate that the ‘combined’ expression (real + p-adic) is irrational while the real solution is rational, then the remaining term must be irrational.

  • A product over the characters of a finite abelian group
    by Hetong Xu on October 18, 2021 at 3:15 pm

    I came across the following problem in my algebraic number theory course: Problem: Let $G$ be an abelian group (with the operation written multiplicatively) of order $fg$. Let $a \in G$ be an element of order $f$. Prove that $$ \prod_{\chi \in \widehat{G}}(1-\chi(a)T) = (1-T^f)^g, $$ where $\widehat{G}$ is the group consisting of all multiplicative characters $\chi: G \rightarrow \mathbb{C}^{\times}$. We know that $G$ is (non-canonically) isomorphic to $\widehat{G}$. Question: How to prove this? Attempts: I tried to expand both sides and compare coefficients. The right-hand side is immediate from the binomial theorem: $(1-T^f)^g = \sum_{k=0}^{g}\binom{g}{k}(-T)^{kf}.$ On the left-hand side, the coefficient of $(-T)^n$ is $$ C_n := \sum_{1 \leq i_1 < i_2 < \cdots < i_n \leq fg} \chi_{i_1}(a) \chi_{i_2}(a) \cdots \chi_{i_n}(a). $$ Comparing coefficients, I am trying to prove: Claim: $C_n = \binom{g}{m}$ when $n=fm$ for some $m \in \mathbb{Z}_{\geq 0}$, and $C_n = 0$ when $f \nmid n$. A special case: when $n=1$ and $a \neq 1_G$, $C_1 = 0$ by the orthogonality of characters: $\sum_{\chi \in \widehat{G}} \chi(g) = 0$ if $g \neq 1_G$. So to prove the claim, I also tried to imitate the proof of this orthogonality relation. Proof: Let $g \in G-\{1_G\}$, and consider the subgroup $G^{\prime}$ generated by $g$. Then $|G/G^{\prime}| < |G|$. Consider $H = \{\chi \in \widehat{G}: \chi(g)=1\}$; for any $\chi \in H$ we have $\ker \chi \supset G^{\prime}$, hence $\chi$ induces $\widetilde{\chi}: G/G^{\prime} \rightarrow \mathbb{C}^{\times}$. Moreover, different characters in $H$ induce different characters on $G/G^{\prime}$. Hence $$ |H| \leq |(G/G^{\prime})^{\wedge}| = |G/G^{\prime}| < |G| = |\widehat{G}|. $$ Hence $H \subsetneq \widehat{G}$, and therefore there exists $\psi \in \widehat{G}$ such that $\psi(g) \neq 1$.
    Therefore $$ \sum_{\chi \in \widehat{G}} \chi(g) = \sum_{\chi \in \widehat{G}} \psi \chi(g) = \sum_{\chi \in \widehat{G}} \psi(g) \chi(g) = \psi(g) \sum_{\chi \in \widehat{G}} \chi(g). $$ As $\psi(g) \neq 1$, we must have $\sum_{\chi \in \widehat{G}} \chi(g)=0$. Inspired by this, I tried to consider the order-$f$ subgroup generated by $a$ in $G$. But I got stuck here and do not know how to carry on. Further question: the notation here, especially $f$ and $g$, is also the notation used for the decomposition of primes in number fields (where $f$ is the inertia degree and $g$ is the number of distinct prime ideals in the decomposition of $\mathfrak{p} \subset \mathcal{O}_K$ in $\mathcal{O}_L$). So, just a wild guess: does this identity have any background in deeper results or useful tricks, or some relation to the decomposition of primes? Where might the result be used in number theory? Sorry for such a long post, and thank you all for commenting and answering! 🙂
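    The identity can at least be machine-checked for small groups. Here is a sketch (my own naming) for $G=\Bbb Z/m\times\Bbb Z/n$, which covers small cases since every finite abelian group is a product of cyclic groups:

```python
import cmath, math
from itertools import product

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [0j] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def identity_holds(m, n, a, tol=1e-8):
    """Check prod_chi (1 - chi(a) T) == (1 - T^f)^g for G = Z/m x Z/n,
    where f = ord(a) and g = |G| / f."""
    f = next(k for k in range(1, m * n + 1)
             if (k * a[0]) % m == 0 and (k * a[1]) % n == 0)
    g = (m * n) // f
    # left-hand side: product over all characters chi_{(r,s)}
    lhs = [1 + 0j]
    for r, s in product(range(m), range(n)):
        chi_a = cmath.exp(2j * math.pi * (r * a[0] / m + s * a[1] / n))
        lhs = poly_mul(lhs, [1, -chi_a])
    # right-hand side: binomial expansion of (1 - T^f)^g
    rhs = [0.0] * (m * n + 1)
    for k in range(g + 1):
        rhs[k * f] = math.comb(g, k) * (-1) ** k
    return all(abs(c - d) < tol for c, d in zip(lhs, rhs))
```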

  • Angles between lines in $3$D space
    by Dotman on October 18, 2021 at 8:46 am

    Suppose I have two lines in $3$D space passing through the origin. The smallest angle formed between them is between $0$ and $\pi/2$. Minimizing the cosine of this angle, we get $\cos {(\pi/2)}=0$. For $3$ lines there are in total $3$ angles between them. Again suppose these angles are between $0$ and $\pi/2$. The minimum of the sum of the cosines of these angles is $0$: when each line is $\pi/2$ away from the other two. But for $4$ lines and beyond, not every angle can be made $\pi/2$ in $3$D space. I need to find the minimum of the sum of cosines for $n$ lines. For example, take the case of $4$ lines. There are $6$ angles formed between them. I tried to minimize the sum of cosines numerically and got the value $1$. This is the case when $3$ lines are mutually perpendicular and the fourth line is along one of those $3$. For $5$ lines, the minimum is $2$: lines doubled up along two of the axes and one line perpendicular to them. For $6$ lines, the minimum is $3$: two lines along each axis. From the numerical results, it seems that the minimum is attained when the lines are along the $3$ coordinate axes, but I don’t yet have a proof of this. Can a proof be given by induction? Kindly help out in any way possible.
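    For anyone wanting to reproduce the numerics, a small stdlib sketch (my own naming): the axis configurations above evaluate exactly to the reported minima, and a crude random search does not beat them:

```python
import math, random
from itertools import combinations

def cos_sum(lines):
    """Sum over all pairs of |u.v|, the cosine of the acute angle between lines."""
    return sum(abs(sum(a * b for a, b in zip(u, v)))
               for u, v in combinations(lines, 2))

def random_line(rng):
    """Uniform random direction (a unit vector) in R^3."""
    while True:
        v = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-12:
            return [c / norm for c in v]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
```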

  • Is $\sum_{a=0}^m\sum_{b=0}^n\cos(abx)$ always positive?
    by Thomas Browning on October 18, 2021 at 2:54 am

    Fix integers $m,n\geq0$. Do we have the inequality $\displaystyle\sum_{a=0}^m\sum_{b=0}^n\cos(abx)>0$ for all $x\in\mathbb{R}$? We can also write this function as \begin{align*} \sum_{a=0}^m\sum_{b=0}^n\cos(abx)&=m+n+1+\sum_{a=1}^m\sum_{b=1}^n\cos(abx)\\ &=m+n+1+\sum_{a=1}^m\frac{1}{2}\left(\frac{\sin((n+1/2)ax)}{\sin(ax/2)}-1\right)\\ &=\frac{m}{2}+n+1+\frac{1}{2}\sum_{a=1}^mD_n(ax), \end{align*} where $$D_n(x)=\frac{\sin((n+1/2)x)}{\sin(x/2)}$$ is the Dirichlet kernel (up to a factor of $2\pi$, depending on your convention). Using this formula, it is easy to check the conjecture for small values of $m$ and $n$ (desmos link).
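    A slightly more systematic version of that numerical check, as a grid search over one period (evidence for small $m,n$, of course, not a proof):

```python
import math

def S(m, n, x):
    """The double cosine sum from the question."""
    return sum(math.cos(a * b * x) for a in range(m + 1) for b in range(n + 1))

def min_on_grid(m, n, pts=2000):
    """Minimum of S(m, n, .) over a grid of one period [0, 2*pi]."""
    return min(S(m, n, 2 * math.pi * k / pts) for k in range(pts))
```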

  • Is the space $\mathcal C([0,1])$ endowed with the sup norm homeomorphic to $\mathcal C([0,1])$ endowed with the integral norm?
    by José Carlos Santos on October 17, 2021 at 2:36 pm

    Consider the space $\mathcal C([0,1])$ of all continuous functions from $[0,1]$ into $\Bbb R$. A norm which is natural to use in this space is the sup norm: $\|f\|_\infty=\sup|f|$; another one is the integral norm: $\|f\|_1=\int_0^1|f|$. Are $\bigl(\mathcal C([0,1]),\|\cdot\|_\infty\bigr)$ and $\bigl(\mathcal C([0,1]),\|\cdot\|_1\bigr)$ homeomorphic? My guess is that they are not, but I am unable to prove it. It is clear that these metrics are not equivalent. And, of course, since $\bigl(\mathcal C([0,1]),\|\cdot\|_\infty\bigr)$ is a complete metric space, whereas $\bigl(\mathcal C([0,1]),\|\cdot\|_1\bigr)$ isn’t, there is no bijection $f\colon\bigl(\mathcal C([0,1]),\|\cdot\|_\infty\bigr)\longrightarrow\bigl(\mathcal C([0,1]),\|\cdot\|_1\bigr)$ such that both $f$ and its inverse are uniformly continuous, but this doesn’t prove the impossibility of the existence of a homeomorphism.

  • An uncommon continued fraction of $\frac{\pi}{2}$
    by Limerence Abyss on October 17, 2021 at 10:24 am

    I’m currently stuck on the following infinite continued fraction: $$\frac{\pi}{2}=1+\dfrac{1}{1+\dfrac{1\cdot2}{1+\dfrac{2\cdot3}{1+\dfrac{3\cdot 4}{1+\cdots}}}}$$ There is an obscure clue on this: just as one can derive the familiar Lord Brouncker fraction $$ \frac{4}{\pi}=1+\dfrac{1^{2}}{2+\dfrac{3^{2}}{2+\dfrac{5^{2}}{2+\dfrac{7^{2}}{2+\cdots}}}} $$ from Wallis’ formula $$ \dfrac{2}{\pi}=\frac{1 \cdot 3}{2 \cdot 2} \cdot \frac{3 \cdot 5}{4 \cdot 4} \cdot \frac{5 \cdot 7}{6 \cdot 6} \cdot \frac{7 \cdot 9}{8 \cdot 8} \cdots $$ the first fraction should be provable in the same manner. However, I’m not getting anywhere with it using Wallis’ formula. I’d really appreciate it if anyone could point me in the right direction, or explain how to systematically derive such continued fractions from a given convergent infinite product.
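    For experimentation, here is a bottom-up evaluator of the truncated fraction (my own sketch). Its first few convergents come out as $2, \frac43, \frac{16}{9}, \frac{64}{45},\dots$, which appear to be exactly the partial products of Wallis’ formula, so that may be the bridge to look for:

```python
import math

def cf(N):
    """Value of the fraction truncated after the partial numerator (N-1)*N,
    evaluated from the inside out."""
    t = 1.0
    for j in range(N, 1, -1):
        t = 1 + (j - 1) * j / t
    return 1 + 1 / t
```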

  • Are there any books written using dialogues?
    by Philippe Lafourcade on October 16, 2021 at 10:06 pm

    Last year I read Questions And Answers In School Physics. The book is based on a dialogue between a student and a teacher. A lot of concepts and ideas are seamlessly driven by the dialogue. I found the presentation to be lucid and pedagogical. I am wondering if there are books with a similar style, but for mathematics.

  • Wave equation: predicting geometric dispersion with group theory
    by Sal on October 16, 2021 at 7:35 pm

    Context The wave equation $$ \partial_{tt}\psi=v^2\nabla^2 \psi $$ describes waves that travel with frequency-independent speed $v$, i.e. the waves are dispersionless. The character of solutions differs between odd and even numbers of spatial dimensions, $n$. A point source in odd $n$ creates a disturbance that propagates on the light cone and vanishes elsewhere: if the point source is a flash of light, an observer sees darkness, then a flash, then darkness. When $n$ is even, a disturbed medium never returns to rest: the observer sees darkness, then brightness that lingers for all $t$. This phenomenon is known as geometric dispersion. Question Is it possible to show that geometric dispersion is predicted by the wave equation, using group theory? For a point source at the origin, we would be searching for spherically symmetric solutions, and the rotation group $SO(n)$ has a different structure depending on whether $n$ is odd or even. In particular, I am interested in doing this without actually solving the wave equation. Unfortunately, I don’t know enough group theory to know if this is even possible. What I know I can ‘show’ geometric dispersion by solving the wave equation with an initial condition, or by computing the Green’s function for the wave equation and noting that it is supported either only on the light cone (odd $n$) or everywhere within the light cone (even $n$). I know some group theory ‘for physicists’. Related This unanswered question is similar. I think my question is more specific: I’m asking about a way to predict (rather than explain) geometric dispersion using group theory.
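For reference, the contrast described under "What I know" can be made explicit with the standard free-space retarded Green’s functions (quoted from the literature, with $v=1$, $r=|\mathbf x|$, and $H$ the Heaviside step function):

```latex
% Retarded Green's functions of  \partial_{tt} G - \nabla^2 G = \delta(t)\,\delta^n(\mathbf{x}):
G_3(t,\mathbf{x}) = \frac{\delta(t-r)}{4\pi r}
  \quad\text{(odd $n=3$: supported only on the light cone $t=r$)},
\qquad
G_2(t,\mathbf{x}) = \frac{H(t-r)}{2\pi\sqrt{t^2-r^2}}
  \quad\text{(even $n=2$: supported everywhere inside the cone $t>r$)}.
```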

  • Sum of angles under which a fixed line segment is seen from points situated on another line segment
    by Khosrotash on October 16, 2021 at 6:12 pm

    I have a question, illustrated in the picture attached below. I can find each angle by the sine (or cosine) rule, but I think there is an easier way: a clue, a concept, which makes it easy. Can someone help me? I would appreciate any hint. For example, to find $A$ I use $$BC=\sqrt 2,\quad AC=6,\quad AB=\sqrt {26}\\\cos(A)=\frac{c^2+b^2-a^2}{2bc}=\frac{36+26-2}{2\cdot 6\cdot\sqrt{26}}$$ then find $A=11.3099^\circ$, and do the same for all the angles. But this method is not satisfying. (The gray squares are equal.) Thanks in advance.
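For what it’s worth, the law-of-cosines computation can be checked in a few lines (using the side lengths quoted in the question):

```python
import math

# Side lengths from the question: a = BC, b = AC, c = AB
a, b, c = math.sqrt(2), 6.0, math.sqrt(26)

cos_A = (c**2 + b**2 - a**2) / (2 * b * c)   # = 60 / (12*sqrt(26)) = 5/sqrt(26)
A = math.degrees(math.acos(cos_A))
print(A)  # ≈ 11.3099, matching the value in the question
```

Note that $\cos A = 5/\sqrt{26}$ means $A = \arctan(1/5)$, i.e. $\tan A = 1/5$ exactly; on a grid, such arctangent identities are often the "easy way" the question is hoping for.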

  • Show that there exists a $c : f(c) = g(c)$
    by Tanamas on October 16, 2021 at 4:20 pm

    I’ll try to present a solution for this problem, and I hope I can receive feedback on what went wrong, if something went wrong of course. Let $f, g : [a, b] \to \Bbb R$ be continuous functions with $\int_{a}^{b} f(x)\, dx = \int_{a}^{b} g(x)\, dx$. Show that there exists $c \in [a, b]$ such that $f(c) = g(c)$. Solution Let’s define $$h(x) = \int_{a}^{x}f(t)\,dt-\int_{a}^{x}g(t)\,dt.$$ Then $h$ is continuous on $[a,b]$ and, by the fundamental theorem of calculus, differentiable on $(a,b)$, since $f$ and $g$ are continuous. I hope this argument is correct. We see that $h(a) = h(b) = 0$. Applying Rolle’s Theorem, we get that $\exists\, \xi \in (a,b) : h'(\xi) = 0$. In other terms, $f(\xi) = g(\xi)$. $\square$ Thanks!
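A concrete numerical instance of the argument (my own illustration, not part of the proof): take $f(x)=x^2$ and $g(x)=1/3$ on $[0,1]$, which have equal integrals (both $1/3$); the crossing point guaranteed by Rolle’s theorem is $\xi = 1/\sqrt3$.

```python
# f and g have equal integrals over [0, 1], so h(x) = int_0^x (f - g)
# vanishes at both endpoints and Rolle's theorem gives a point where f = g.
f = lambda x: x * x
g = lambda x: 1.0 / 3.0

# Locate the crossing of f - g on [0, 1] by bisection (sign change exists)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) - g(mid) < 0:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2
print(xi)  # ≈ 0.57735 = 1/sqrt(3), where f(xi) = g(xi)
```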

  • How to prove $\frac{ab^2}{1+2b^2+c^2}+\frac{bc^2}{1+2c^2+a^2}+\frac{ca^2}{1+2a^2+b^2} \le \frac{3}{4}$ if $a+b+c=3$
    by Modern_Hunter on October 16, 2021 at 2:59 pm

    $a,b,c\ge 0,\ a+b+c=3.$ Prove: $$\frac{ab^2}{1+2b^2+c^2}+\frac{bc^2}{1+2c^2+a^2}+\frac{ca^2}{1+2a^2+b^2} \le \frac{3}{4}$$ This problem was found in this post. As you can see, no one in that post gave a correct proof, but someone pointed out that this inequality might be a problem from Mathematical Reflections. However, the archive of the journal can’t be downloaded, so we must find our own solution. I tried it myself, of course. Because all three variables show up in each fraction, it is hard to use techniques like the tangent line trick. I tried to homogenise it, and it turned out to be (in case the notation is unclear: this triangle denotes the coefficients of every term of the resulting degree-7 polynomial; the top-left entry is the coefficient of $a^7$, the top-right is that of $b^7$, the bottom is that of $c^7$, and, for instance, the “$762$” in the second position of the second row is the coefficient of $a^5bc$) which is almost impossible to turn into a proof directly by hand (expanding is not always a perfect way). Can you come up with a clever solution?
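Not a proof, but a quick random-sampling sanity check (my own sketch; function names are ad hoc) is consistent with the claimed bound, with equality at the symmetric point $a=b=c=1$:

```python
import random

def lhs(a, b, c):
    """Left-hand side of the conjectured inequality."""
    return (a*b*b / (1 + 2*b*b + c*c)
          + b*c*c / (1 + 2*c*c + a*a)
          + c*a*a / (1 + 2*a*a + b*b))

print(lhs(1, 1, 1))  # exactly 3/4 at a = b = c = 1

random.seed(0)
worst = 0.0
for _ in range(100_000):
    # Sample (a, b, c) >= 0 uniformly on the simplex a + b + c = 3
    u, v = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = u, v - u, 3 - v
    worst = max(worst, lhs(a, b, c))
print(worst)  # never exceeds 0.75 in these samples
```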

  • A Golden Angle Conjecture
    by Robert Mashlan on October 16, 2021 at 2:42 pm

    Update: this conjecture is broken for $n=9$. This conjecture is related to the process of phyllotaxis in plants, and to understanding why nature would choose to iterate the Golden Angle for this process after a sufficiently long period of evolution. Essentially, I want to know if the Golden Angle is ideal for distributing plant structures around a 2-d radial point of origin at low iteration counts. For instance, if you want to distribute three seeds around a circle, iterating the Golden Angle three times will place a seed in each third of the circle, starting from the angle origin. If you want to add another seed, you simply iterate another Golden Angle, placing the four seeds in each quarter of the circle, and so on. My conjecture is that for every integer $n > 0$ and each integer $i$ in $[1\ldots n]$, there is exactly one integer $j$ in $[1\ldots n]$ such that $$\frac {i-1} n<(j\theta)\bmod 1<\frac {i}n,$$ where $\theta = \frac{3 - \sqrt 5}2$ (acute golden angle) or $\theta = \frac{\sqrt 5 - 1}2$ (obtuse golden angle). In other words, as $j$ runs through $[1\ldots n]$, the values $j\theta$ land in each of the $n$ regular sectors of the circle exactly once. Note: the angle discussed here is in units of turns of a circle, and has nothing to do with $\pi$, other than considering this angle to be the distance traveled along the circumference of a circle with $r=\frac{1}{2\pi}$. Note: non-strict inequalities could be used in some of the comparisons, but the assumption is that $\theta$ is irrational and can never equal a rational number; the conjecture would obviously fail for rational $\theta$ whose denominator is a multiple of $n$. Q1: Is there an easy proof of this conjecture, or a derivation from another theorem? Q2: Is there another value of $\theta$ for which this conjecture is also true?
    Q3: In a model where some specific number of seeds/structures is geometrically more likely than adjacent numbers, could evolution choose a slightly different iteration angle that does a better job? Notes: One simple verification of this conjecture for $n=10$ is to observe that the mod-1 residues of 10 iterations of the Golden Angle cycle through all 10 first decimal digits (the arithmetically nimble can also use this list to verify $n=1,\ldots,9$):

            acute                  obtuse
      j=1   0.3819660113  i=4      0.6180339887  i=7
      j=2   0.7639320225  i=8      0.2360679775  i=3
      j=3   0.1458980338  i=2      0.8541019662  i=9
      j=4   0.5278640450  i=6      0.4721359550  i=5
      j=5   0.9098300563  i=10     0.0901699437  i=1
      j=6   0.2917960675  i=3      0.7082039325  i=8
      j=7   0.6737620788  i=7      0.3262379212  i=4
      j=8   0.0557280900  i=1      0.9442719100  i=10
      j=9   0.4376941013  i=5      0.5623058987  i=6
      j=10  0.8196601125  i=9      0.1803398875  i=2

    The equidistribution theorem is important to this conjecture: the sequence $a, 2a, 3a, \ldots \pmod 1$ is uniformly distributed on the circle $\Bbb R/\Bbb Z$ when $a$ is irrational. The Three-Gap theorem (formerly the Steinhaus conjecture, proved in 1950) is possibly also important: if one places $n$ points on a circle, at angles of $\theta, 2\theta, 3\theta, \ldots$ from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle; when there are three distances, the largest always equals the sum of the other two. Also intriguing is Hurwitz’s irrational number theorem, which singles out the golden ratio as the ‘worst’ case when tightening Lagrange’s theorem on rational approximations of irrational numbers. I have not yet read the proof.
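The update at the top ("broken for $n=9$") can be reproduced mechanically. Here is a small script (my own sketch; the floor-based sector assignment is safe because $\theta$ is irrational, so residues never land exactly on a sector boundary):

```python
import math

ACUTE = (3 - math.sqrt(5)) / 2    # acute golden angle, in turns
OBTUSE = (math.sqrt(5) - 1) / 2   # obtuse golden angle, in turns

def is_bijection(n, theta):
    """True iff j*theta mod 1, for j = 1..n, lands in each of the
    n sectors ((i-1)/n, i/n) exactly once."""
    sectors = {math.floor((j * theta % 1.0) * n) for j in range(1, n + 1)}
    return len(sectors) == n

for n in range(1, 13):
    print(n, is_bijection(n, ACUTE), is_bijection(n, OBTUSE))
```

Running this confirms the table above: $n=10$ passes for both angles, while $n=9$ fails for both (it also flags other small failing $n$).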

  • Expectation of Brownian motion at a stopping time depending on a brownian motion
    by finquant75 on October 16, 2021 at 2:11 pm

    I am having difficulties with a very concrete example in stochastic calculus. Let $B$ and $W$ be two independent Brownian motions on a filtration $(F_t)_{t\geq0}$ and let $\lambda = 1 + \exp(-B^2_1)$ be a stopping time. Compute $\mathbb{E}[B_\lambda]$, $\mathbb{E}[B_\lambda^2]$, $\mathbb{E}[W_\lambda]$ and $\mathbb{E}[W_\lambda^2]$. I first started by writing that $\lambda$ is a stopping time, so $\mathbb{E}[W_\lambda]=\mathbb{E}[W_0]=0$ and $\mathbb{E}[W_\lambda^2] = \mathbb{E}[\lambda]$ by optional stopping, but I really have no intuition for computing the expectation of the Brownian motion $B$ at time $\lambda$, because $\lambda$ depends on $B_1$, so it can’t be the same result as for $W$.
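Not an answer, but a quick Monte-Carlo sketch of the setup (my own code, discretising the Brownian paths; all names are ad hoc) can help build intuition. Note $\lambda \in (1,2]$, so paths on $[0,2]$ suffice:

```python
import math, random, statistics

random.seed(42)
DT = 0.01                        # Euler time step
STEPS = round(2 / DT)            # simulate paths on [0, 2], since lambda <= 2
STEPS_TO_1 = round(1 / DT)       # grid index of time t = 1

def brownian_path():
    """Discretised standard Brownian motion on [0, 2]."""
    path = [0.0]
    for _ in range(STEPS):
        path.append(path[-1] + random.gauss(0.0, 1.0) * DT ** 0.5)
    return path

samples_B, samples_W = [], []
for _ in range(3000):
    B, W = brownian_path(), brownian_path()
    lam = 1 + math.exp(-B[STEPS_TO_1] ** 2)  # the stopping time from the question
    idx = round(lam / DT)                    # nearest grid index to time lambda
    samples_B.append(B[idx])
    samples_W.append(W[idx])

print(statistics.mean(samples_W))  # ≈ 0: W is independent of lambda
print(statistics.mean(samples_B))  # compare against your analytic guess
```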

  • Expected number of collinear points in non-repeating random walk (from XKCD)
    by Milten on October 16, 2021 at 12:10 pm

    Today’s XKCD comic (see below) featured a couple of joke open math problems, and one of them caught my interest (being the only one that’s an actual mathematical question!): If I walk randomly on a grid, never visiting any square twice, placing a marble every $N$ steps, on average how many marbles will be in the longest line after $N\times K$ steps? Here is how I interpret it formally: Let $N,K\in\Bbb N$, and consider a finite-length symmetric random walk $X=(X_0, X_1, \ldots, X_{NK-1})$ of length $NK$ on the plane grid. Let $Y_i = X_{iN}$ for $i\in\Bbb N_0$; $Y_i$ is the position of the $i$’th marble. What is the expected cardinality of the maximum set of collinear $Y_i$, given that $X_j\ne X_k$ for all $j\ne k$? To my eyes this looks pretty complicated; I can’t tell whether it is feasible or not. I would love to see any analysis of this, even if a complete answer is out of reach. Feel free to simplify the problem too, if it leads to something interesting. For example we could: Drop the condition that the random walk be non-repeating. Look at an infinite random walk instead (i.e. set $K=\infty$). Set $N=1$. Look at the expected number of collinear marbles in some sense, instead of looking at the maximal collinear set (perhaps taking the expectation over lines that go through two marbles; I don’t know). To be clear, I am not asking all these things at the same time! I am just looking for any interesting discussion of the problem.
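The problem is at least easy to simulate, which might give data to conjecture from. Here is a rough sketch (my own code): it generates a non-repeating walk by choosing uniformly among unvisited neighbours and restarting when trapped (note this myopic sampling is *not* exactly uniform over all self-avoiding walks, a known subtlety), then brute-forces the largest collinear subset of marbles.

```python
import random

def self_avoiding_walk(length, rng):
    """Myopic self-avoiding walk on Z^2: restart whenever trapped."""
    while True:
        pos, visited = (0, 0), {(0, 0)}
        path = [pos]
        while len(path) < length:
            nbrs = [(pos[0] + dx, pos[1] + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (pos[0] + dx, pos[1] + dy) not in visited]
            if not nbrs:
                break                      # trapped: restart from scratch
            pos = rng.choice(nbrs)
            visited.add(pos)
            path.append(pos)
        if len(path) == length:
            return path

def max_collinear(points):
    """Largest number of collinear points, by brute force over pairs."""
    pts = list(set(points))
    if len(pts) <= 2:
        return len(pts)
    best = 2
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            # Count points on the line through pts[i] and pts[j] (cross product test)
            count = sum((x2 - x1) * (y - y1) == (y2 - y1) * (x - x1) for x, y in pts)
            best = max(best, count)
    return best

rng = random.Random(1)
N, K, trials = 5, 6, 200
results = [max_collinear(self_avoiding_walk(N * K, rng)[::N]) for _ in range(trials)]
print(sum(results) / trials)  # Monte-Carlo estimate of the expected longest line
```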

  • Show that $\left(\frac{x_1^{x_2}}{x_2}\right)^p+\left(\frac{x_2^{x_3}}{x_3}\right)^p+\cdots+\left(\frac{x_n^{x_1}}{x_1}\right)^p\ge n$ for any $p\ge1$
    by TheSimpliFire on October 16, 2021 at 10:32 am

    The inequality $\sqrt{\frac{a^b}{b}}+\sqrt{\frac{b^a}{a}}\ge 2$ for all $a,b>0$ was shown here using first-order Padé approximants on each exponent, where the minimum is attained at $a=b=1$. By empirical evidence, it appears that inequalities of this type hold for an arbitrary number of variables. We can phrase the generalised problem as follows. Let $(x_i)_{1\le i\le n}$ be a sequence of positive real numbers. Define $\boldsymbol a=\begin{pmatrix}a_1&\cdots&a_n\end{pmatrix}$ such that $a_k=x_k^{x_{k+1}}/x_{k+1}$ for each $1\le k<n$ and $a_n=x_n^{x_1}/x_1$. How do we show that $$\|\boldsymbol a\|_p^p\ge n$$ for any $p\ge1$? As before, AM-GM is far too weak, since the inequality $\displaystyle\|\boldsymbol a\|_p^p\ge 2\left(\prod_{\text{cyc}}\frac{x_1^{x_2}}{x_2}\right)^{1/{2p}}$ does not guarantee the result when at least one $x_i$ is smaller than $1$. We can eliminate the exponent on the denominator by taking $x_i=X_i^{1/p}$, so that $\displaystyle\|\boldsymbol a\|_p^p=\sum_{\text{cyc}}\frac{X_1^{X_2^{1/p}}}{X_2}$, but the approximant approach is no longer feasible; even in the case where $p$ is an integer, the problem reduces to a posynomial inequality of rational degrees. Perhaps there are some obscure $L^p$-norm/Hölder-type identities of use, but I’m at a loss in terms of finding references. Empirical results: On the interval $p\in[1,\infty)$, Wolfram suggests that the minimum is $n$ (Notebook result), which is obtained when $\boldsymbol a$ is the vector of ones. However, we note that on the interval $p\in(0,1)$, the empirical minimum no longer displays this consistent behaviour, as can be seen in this Notebook result. The sequence $\approx(1.00,2.00,2.01,3.36,3.00,4.00)$ appears to increase almost linearly every two values, but I cannot verify it for a larger number of variables due to instability in the working precision.
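A quick random spot-check (my own sketch, consistent with the Wolfram experiments cited above, not a proof) of the conjectured bound $\|\boldsymbol a\|_p^p \ge n$ for $p \ge 1$, with the minimum at $x_i \equiv 1$:

```python
import random

def norm_p_p(xs, p):
    """Compute ||a||_p^p where a_k = x_k^{x_{k+1}} / x_{k+1} (indices cyclic)."""
    n = len(xs)
    return sum((xs[k] ** xs[(k + 1) % n] / xs[(k + 1) % n]) ** p
               for k in range(n))

print(norm_p_p([1.0] * 5, 1.0))  # exactly n = 5 at the all-ones point

random.seed(0)
for p in (1.0, 1.5, 3.0):
    worst = min(norm_p_p([random.uniform(0.05, 4.0) for _ in range(5)], p)
                for _ in range(20_000))
    print(p, worst)  # empirically never drops below n = 5
```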

  • Existence of orthonormal basis for $L^2(G)$ in $C_c(G)$.
    by Calculix on October 15, 2021 at 8:52 pm

    Suppose that $G$ is a locally compact (Hausdorff) group endowed with the Haar measure. It is well-known that the compactly supported functions $C_c(G)$ are dense in $L^2(G)$. In the book “Operator algebras, theory of $C^{*}$-algebras and von Neumann algebras” written by Bruce Blackadar it is claimed (without proof) that $L^2(G)$ admits an orthonormal basis contained in $C_c(G)$. I didn’t immediately see why this is true. So I started to look for an argument and encountered an MSE-post, which shows that it is not always possible to find an orthonormal basis for a non-separable Hilbert space in a given dense subspace. I know that Blackadar’s claim is true in the following cases: If $G$ is second countable, then $L^{2}(G)$ is separable. So one can use the Gram-Schmidt procedure to find an orthonormal basis in $C_{c}(G)$. If $G$ is compact and abelian, then $\widehat{G}$ (= Pontryagin dual) is an orthonormal basis for $L^{2}(G)$. Note that $\widehat{G}$ can be viewed as a subset of $C_{c}(G)$ in this case. But does anyone know why Blackadar’s claim is true (or false) for general $G$? Or does anyone know a reference for this?