Rigour in mathematics

Mathematics is very rigorous, and everything must be proven properly, even things that seem true and obvious.

Can you give me examples of conjectures/theories that seemed true but through rigorous mathematical proving it was shown otherwise?


Finding the roots of a linear polynomial is trivial. Already the Babylonians could find roots of quadratic polynomials. Methods to solve cubic and fourth-degree polynomials were discovered in the sixteenth century, all using radicals (i.e. $n$th roots for some $n$).
Isn’t it obvious that finding the roots of higher-degree polynomials is also possible using radicals, and that the only reason we have not found the formulas yet is that they become more and more complicated as the degree grows?

Galois theory shattered this belief.

Berry’s Phase was discovered after a lack of rigor in the proof of the Adiabatic theorem was discovered around 1980. It now appears in standard Quantum Mechanics texts and has produced at least 3000 papers since 1980 (actually, that number is about 10 years old, I’m not sure how many by now).

While this appears in physics, the mistake was mathematical, in particular topological. There is an integration over parameter space in the proof which was assumed to vanish trivially. That would be fine if the parameter space were one-dimensional, but for higher-dimensional parameter spaces there may be a singularity in the domain that obstructs the vanishing of the integral. Once this mistake was uncovered, physicists gained insight from the corrected theorem and could manipulate the phase with ease. It would be exciting to see similar developments in other physical arenas where ad hoc mathematics is used. However, this seems to be the aberration from the norm, much as in mathematics itself: I think it’s fair to say that most often the heuristic proof has turned out to be correct once the details are fleshed out. That is why this thread is interesting.

First thing that comes to mind: not every smooth function is equal to its Taylor series over the series’ region of convergence. As a counterexample, consider the function
$$f(x) = \begin{cases} 0 & x = 0 \\ \exp\left(-\frac{1}{x^2}\right) & x \neq 0 \end{cases}$$
whose Taylor series centered at $x=0$ is simply $0$, with an infinite radius of convergence.
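The flatness at the origin can be seen numerically; a minimal sketch (the function name `f` just mirrors the answer's definition):

```python
import math

def f(x):
    """The classic flat function: every derivative at 0 vanishes."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# Near 0, f(x) shrinks faster than any power of x, which is exactly
# what forces every Taylor coefficient at 0 to be zero.
for x in (0.5, 0.1, 0.05):
    print(x, f(x), f(x) / x**10)  # even f(x)/x^10 tends to 0
```

Since $f$ is not identically zero on any neighborhood of $0$, the Taylor series converges everywhere but only agrees with $f$ at the single point $x=0$.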

I think the simplest example is the answer to the question:

Are there more rational numbers than natural ones, or are there equally many of each?

Intuition says that “of course there are many, many more rationals”. However, a rigorous mathematical proof shows that there are exactly as many of each.
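The bijection with the naturals can even be made explicit. A small sketch using the Calkin–Wilf sequence, which lists every positive rational exactly once (the helper name `calkin_wilf` is mine):

```python
from fractions import Fraction

def calkin_wilf(n):
    """Return the first n terms of the Calkin-Wilf sequence,
    an explicit enumeration of the positive rationals without repeats."""
    q = Fraction(1)
    out = []
    for _ in range(n):
        out.append(q)
        # next term: 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return out

print(calkin_wilf(7))  # 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
```

Mapping term $n$ of this sequence to the natural number $n$ is exactly the kind of pairing the rigorous proof provides.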

The one that currently bugs me is that exponentiation is Diophantine.

This means that there exists an integer polynomial (so, no variables in the exponents) $P(x,y,z,w_1,\dots,w_n)$ such that:

$$\forall x,y,z\in\mathbb N\,\left(z=x^y \iff \exists w_1,\dots,w_n\in\mathbb N\,\left(0=P(x,y,z,w_1,\dots,w_n)\right)\right)$$

I’ve read the proof. I believe the proof is correct. I still don’t instinctively believe the result.

One of the surprising results of this is that first order number theory only needs to have multiplication and addition – you can still answer questions about exponentiation using the above polynomial.

Maybe you should also consider statements that are obviously true and are indeed true, but their proof is far from straightforward. The Jordan curve theorem surely falls under this category.

The Axiom of Choice was believed to be true (in $\sf ZF$), but it turned out to be independent of $\sf ZF$.
It might be good to mention that not being provable doesn’t necessarily imply being false.

Not really an answer, but here you go anyway: the number of all Turing machines is countable. Thus, no matter the encoding, almost all real numbers are not computable.
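The countability claim can be made concrete: every Turing machine (or program) is a finite string, and finite strings inject into $\mathbb N$. A minimal sketch (the names `godel_number` and `decode` are illustrative):

```python
def godel_number(program: str) -> int:
    """Injectively map a program's source text to a natural number.
    A leading 0x01 byte is prepended so leading zero bytes survive."""
    return int.from_bytes(b"\x01" + program.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Invert godel_number, showing the map is injective."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big")[1:].decode("utf-8")

src = "print('hello')"
n = godel_number(src)
assert decode(n) == src  # round trip succeeds
```

Since the programs inject into $\mathbb N$ while the reals are uncountable, all but countably many reals have no program computing them.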

Using constructive mathematics, which allows us to define a function the way we intuitively think about one,

a function $f:A \to B$ is a procedure assigning to every value in $A$ a unique value in $B$,

we can prove rigorously that all functions are continuous. A shockingly surprising statement. It turns out that all of our classic discontinuous functions aren’t actually computable.
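A rough illustration of why a step function is not computable, under the common model in which a real number is given by rational approximations accurate to within $2^{-n}$ (the names `sign` and `approx` are mine, and the precision cutoff is an artificial stand-in for non-termination):

```python
from fractions import Fraction

def sign(approx, max_precision=64):
    """Try to decide the sign of a real given by approx(n), a rational
    within 2**-n of the true value.  We refine until 0 is excluded.
    For a real that is exactly 0, no precision ever suffices, so we
    cut off at max_precision instead of looping forever."""
    for n in range(max_precision):
        a = approx(n)
        eps = Fraction(1, 2**n)
        if a - eps > 0:
            return +1
        if a + eps < 0:
            return -1
    return None  # undecided: the real may be exactly 0

third = lambda n: Fraction(1, 3)           # the real number 1/3
tiny  = lambda n: Fraction(1, 2**(n + 1))  # valid approximations of 0

assert sign(third) == +1   # decided after a few refinements
assert sign(tiny) is None  # never decided: step functions break at 0
```

Any procedure computing a discontinuous step would have to decide “is $x > 0$?” in finitely many steps, which the sketch shows is impossible at the jump point.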

It might seem obvious that every topological manifold has a triangulation; after all, manifolds are just spaces which are locally Euclidean, so we should be able to split the manifold up into compact pieces which are homeomorphic to disks and then triangulate from there, right?

In fact, in dimension $4$ there are non-triangulable manifolds, such as the $E_8$ manifold. In higher dimensions the problem may still be open (although a recent preprint of Ciprian Manolescu may have proved that there are non-triangulable manifolds in all dimensions greater than $4$ as well; a great early review of the result and its implications can be seen here).

Intuitively, it’s easy to speak about volumes or areas of sets (in $\mathbb R^n$), e.g. “the smaller a set, the smaller its volume”, which would lead to the statement

$$A \subset B \Rightarrow \operatorname{vol}(A) \leq \operatorname{vol}(B)$$

Our error is in assuming that all sets actually have some well-defined volume. On precisely defining what we mean by a measure $\operatorname{vol}(\cdot)$, it turns out that we cannot define one for all subsets: there are non-measurable sets.

Statements like the above are actually only valid given that the sets we talk about are measurable.
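A short sketch of why non-measurable sets must exist (the classical Vitali construction, using the Axiom of Choice): declare $x \sim y$ for $x, y \in [0,1]$ iff $x - y \in \mathbb Q$, and pick one representative from each equivalence class, collecting them into a set $V$. The translates $V + q$ for $q \in \mathbb Q \cap [-1,1]$ are pairwise disjoint and satisfy

$$[0,1] \subseteq \bigcup_{q \in \mathbb Q \cap [-1,1]} (V+q) \subseteq [-1,2].$$

If $V$ had a volume, countable additivity and translation invariance would force $1 \leq \sum_q \operatorname{vol}(V) \leq 3$, which fails whether $\operatorname{vol}(V) = 0$ (the sum is $0$) or $\operatorname{vol}(V) > 0$ (the sum is infinite). So $V$ is not measurable.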

We might think there is a universal set, but mathematicians proved that no such set exists. The existence of a universal set might not seem obviously true to you, but to a non-mathematician there obviously is one! The point is that what counts as “obvious” takes shape with people, and for the person who proved the theorem it is no longer obvious.

This is an example of a statement that seems true and turns out to be true (like one of Hagen von Eitzen’s answers).

Let $p(x)\in\mathbb{R}[x]$; then $p(x)=q_1(x)q_2(x)\cdots q_n(x)$ for
some polynomials $q_1,q_2,\dots,q_n\in\mathbb{R}[x]$ such that $\deg q_i\leq 2$.

Before I knew its proof, I never thought this fact was obvious. But I know a lot of people who think that it is (they don’t know the proof; they just think it’s obvious!).
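One can at least check an instance numerically. The proof pairs each complex root with its conjugate to produce real quadratic factors; here is a sketch verifying the resulting factorization of $x^4+1$, which has no real roots at all (the helper `polymul` is mine):

```python
import math

def polymul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

s = math.sqrt(2)
# Pairing conjugate roots e^{±iπ/4} and e^{±3iπ/4} gives two real quadratics:
q1 = [1.0, -s, 1.0]   # x^2 - sqrt(2)·x + 1
q2 = [1.0,  s, 1.0]   # x^2 + sqrt(2)·x + 1
prod = polymul(q1, q2)
print(prod)  # up to rounding: [1, 0, 0, 0, 1], i.e. x^4 + 1
```

Each conjugate pair $r, \bar r$ contributes the real factor $x^2 - 2\operatorname{Re}(r)\,x + |r|^2$, which is why no factor of degree above $2$ is ever needed.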

Here is a case where you can easily be misled by working out some early examples. All groups of order less than 60 are solvable (a nice definition of this is available on Wikipedia); you can show that by checking them all. With 59 consecutive orders to start with, you might think that all groups are solvable. But it’s not true: there is a group of order 60, the alternating group $A_5$, which is not solvable. This is directly connected to the insolubility in radicals of polynomials of degree $\geq 5$, so it is important to get it right.
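This can be verified computationally: a group is solvable iff its derived series reaches the trivial group, and $A_5$ equals its own commutator subgroup, so its derived series never shrinks. A from-scratch sketch (all helper names are mine):

```python
def compose(p, q):
    """Permutations as tuples of images: (p∘q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    """Generate the finite group closed under composition."""
    group = set(gens)
    frontier = set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in group} \
            | {compose(b, a) for a in frontier for b in group}
        frontier = new - group
        group |= frontier
    return group

def derived(group):
    """Commutator subgroup: generated by all a·b·a⁻¹·b⁻¹."""
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return closure(comms)

# A5, generated by the 3-cycle (0 1 2) and the 5-cycle (0 1 2 3 4)
a5 = closure([(1, 2, 0, 3, 4), (1, 2, 3, 4, 0)])
assert len(a5) == 60
assert derived(a5) == a5  # A5 is perfect, hence not solvable
```

For any solvable group the `derived` map, iterated, would eventually return only the identity permutation; here it is stuck at all of $A_5$.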

The most useful example in this context may be integration. Riemann integration is used everywhere. Consider the simplest function $f(x) = x$. Its antiderivative is $\frac{x^2}{2} + c$, and its definite integral can also be calculated very easily. But there is a theoretical part behind that which is very much necessary for a logical understanding.
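The definition hiding behind the symbol can be sketched directly: Riemann sums of $f(x)=x$ on $[0,1]$ converge to the exact value $\frac12$ (the helper name `riemann_sum` is mine):

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# For f(x) = x on [0, 1] the left sum is 1/2 - 1/(2n), approaching 1/2
for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x, 0.0, 1.0, n))
```

The theory’s job is to say precisely when such limits exist and agree regardless of how the interval is partitioned, which is exactly the part the easy antiderivative calculation hides.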

This will give you some idea of how deceptive and tantalizing even relatively simple things may be. The Wikipedia link tells the temporal history well enough, but to appreciate the work that went into the proofs you have to read at least one original paper…