National Naturalism
  1. The original sin of cooking
  2. The true meaning of love
  3. The truth of our origins
  4. Society
Biblio­graphy
Contact us
Skip navigation

The truth of our origins

Uncovering our past

The true history of Europeans, from the late Paleolithic circa 40 000 BCE to the start of agriculture is much older and more dramatic than what is taught by the mainstream, or alternative circles for that matter. It is a complex story of abrupt degeneration of superior species, humans endowed with extended lifespan and wondrous abilities, and a whole unknown cycle of civilization lasting for tens of thousands of years, that preceded our own and the start of agriculture.

We, Neandertals

The Out-Of-Africa nonsense

We are told through every media outlet that humans, the current one, descended from African apes from 6 to 10 million years ago, along with a plethora of other semi-human lineages that branched off, merged with ours or eventually died, until we emerged victorious in the great evolutionary contest. The earliest part of the story, we do not criticize. When and where exactly we separated from bonobos, how long we wandered in Africa, is but of a mild intellectual interest at best.

It is likely hominids lived in Europe several millions of years ago, but disappeared or migrated back. Before the brain approached 1000cc a million years ago, we consider the debate moot, as we were not human by any stretch of the word.

Debate is warranted when men emerge with brain size at least relative to the lower end of the spectrum today: then the establishment claims that modern man left Africa around 50,000 B.C. to conquer all of Asia and Europe, mostly replacing and driving to extinction the species of men (called “primitive” or archaic) already present, namely the Neandertals and Denisova. To then become fully modern, but strangely enough only outside Africa. Africans morphologically showed very little morphological change, none in their mean brain size these last 100 000 years, a fact rarely mentioned.

Homo Sapiens (whose closest descendants today are the Bushmen, whose IQ barely reach 70) in very small numbers supposedly migrated all the way to Europe in the heart of the Ice Age, to kill or outbreed in a few thousand years a native species much stronger physically, with a bigger brain than any existing population, that adapted to life in the North (some were found in the South of Siberia) and survived several successive glaciation cycles.

Illustration of Jews supporting mass immigration financially
Oy Vey goy, shut up and give your daughters and wives to migrants!
Meme we are all children of immigrants

This myth used ad nauseam justifies all migratory invasions, the replacement of white populations by hordes of Negroes and generalized race mixing. The same ideologues and media repeat night and day that races do not exist, but the white one is decidedly too big and homogeneous or downright evil. The same ones, claiming we are genetically wired to find physical differences more attractive or that mixed individuals - enriched racially - are healthier.

The following questions dismantle the Out-Of-Africa baseless narrative:

But somehow after an epic race war against these Übermensch, Sapiens fell into an eternal slumber of the mind from which they haven’t woken up. Still today the Negro can not survive Northern countries without vitamin supplements and other artifices, how could he have conquered all of Eurasia and displaced the superior natives ?

To see Cheddar Man with his dark skin it definitely provoked quite an emotional response in me, and I think that’s the power of this. It’s one thing to know that there were black people here thousands of years ago and to know that White people weren’t always White. We know there were Africans here before there were English people here, for example, and so through that that gives you a sense of the idea that there’s this indigenous British person who is White and essentially British is a fiction, it’s a narrative that was created over time, it’s not based on scientific facts so this is another feature of that really.

Afua Hirsch, mixed Guanian-Jewish-British Descent National Geographics

This was thoroughly debunked. The study relied on incomplete genetic data, with 60% of the chromosome markers needed for accurate skin tone estimation missing, making the prediction statistically unreliable (probability <50%). The initial DNA samples were also potentially contaminated by modern DNA during collection in 1996, which was never peer-reviewed. The study used broad categories like “dark” skin without accounting for nuances in ancient pigmentation. Later research identified additional genes influencing skin color that weren’t considered in the original analysis. Yes, as ridiculous as it sounds, without a perfect sample—many chromosomes are missing—current models can not tell the difference between brown people and freckled redheads. The later possibility makes more sense considering Cheddar man’s blue eyes.

See the Cheddar man fiasco portraying early-Neolithic Europeans as negroid
Evidences Cheddar man fiasco

Science by and large and all media only exist to serve Jewish interests and destroy European identity. Criticism is smothered if not reputations destroyed. Their strategy is to simply drown the media and institutions in lies, so many that no one has both the knowledge and time to disprove every single one.

Back to Neandertals: Systematically the goal post is shifted to earlier and earlier times, which is incredibly convenient, as DNA recovered for that time is unusable. There were none because all European “Sapiens” are Neanderthals, which incidentally had the unique traits of blue/green eyes and red hair, now nearly only associated with Europe. The molecular clock method is absolutely inadmissible for such a timespan because DNA so old is nowhere near intact enough or exploitable enough, so proving or disproving their claims this time will stay forever impossible.

The anthropological arguments fail, and “experts” in genetics time and time again have proved to falsify and lie about results and said repetitively that the purpose of their work is to fight “racism” by any means necessary (David Reich). And to these people, we entrust the remains of our ancestors and ask what to believe ? We must reject “authorities” and learn to think on our feet.

Mitochondrial Eve

Technically, the two most salient points argued by proponents of the Out-of-Africa viewpoint, are the following:

These computations are obtained by running simple algorithms and formulas on respectively the whole of the mitochondrial DNA (16.5kb) for the female line and a digested Y chromosome for the male line, because these two genetic elements are inherited strictly through the parent of their respective sex with no contribution from the other parent.

The notion of molecular clock that underlies these results, hangs on the idea is that the speed of divergence of sequences not subject to selection would diverge exactly reflect the random accumulation of chance mutations at the average mutational rate.

African DNA’s higher diversity (its wider range of various haplotypes found elsewhere), and the fact it diverged less from chimpanzees does suggest that Africans are indeed closer to our common ancestor, or said otherwise, that these ancestors at some point were indeed Africans, the real question is when. Firstly the idea of a mitochondrial Eve (retracing the female MRCA) is not theoretically sound, and regularly be it in paleontology or archeology physical evidences contredict predictions from genetics. Using mitochondrial DNA in particular has proved a terrible idea, due to its wildly erratic mutation rate variation, patterns of change decoupled from nuclear DNA, that that here too the neutral assumption is unfounded.

The Maximum Genetic Diversity (MGD) theory has superseded neutral theory. It posits an inverse relationship between genetic diversity and epigenetic complexity. According to this axiom, as organisms evolve greater epigenetic complexity (e.g., regulatory networks, developmental programs), their genetic diversity decreases in a punctuated manner. This occurs because epigenetic constraints limit the permissible genetic variation, effectively reducing the number of places available for mutations, including adaptive ones, places now taken by epigenetics-related sequences that mostly accumulate down the phylogenetic tree.

These unchanged positions consist of two types. The first type includes the positions essential for the barebone or minimal function of a gene, whose change is incompatible with the biochemical activity of the gene in a test tube. The second type includes the positions essential for more complex species but not for simple species. As species become more complex, more positions in a gene will become unchanged or involved in more complex traits. Changes in these positions may not alter biochemical function or activity in a test tube, or even short-term phenotypes in living organisms but may reflect taxon-wide ecological constraints and affect long term survival of the more complex species but not the simple ones.

This makes calculating time to divergence based on the accumulation of neutral mutations as per the molecular clock because most places are not neutral and mutations occur in overlap places, leading to a quick saturation for most genes. So distance will only reflect the time it took to reach maximum diversity. For coalescence to work we need reliable sequences that are truly neutral, which is very challenging because all sorts of sequences, coding or non-coding, genic or intergenic regardless of their apparent complexity are transcribed in RNA, and have demonstrated selection and conservation. We identify constantly more unknown functions for non-coding DNA.

To hope to derive the time of divergence (synonymous with the most recent common ancestor) one must carefully choose genes evolving slowly enough not to have saturated yet. As coalescent models grossly ignore these constraints, results are meaningless and contradictory, both with themselves and fossil evidences.

While the theory nor its application by its authors is not a complete theory of evolution by any means1, it is interesting to note that coalescence calculations using slow-mutating yet neutral sequences we actually find a time of divergence of all human races of approximately 2 millions, in total accordance with the oldest fossils of Eurasian Pleistocene hominids even though the authors obviously did not intent to achieve this, but only worked from genetics.

A few words need be said about recovered Neanderthal DNA and subsequently the genomes sequenced from it: it is garbage due to the exponential rate of decay with time. It is so obviously decayed it includes many features not found in any ape, indeed incompatible with life, that have still not be corrected in the Vindija Cave’s reference genome to this day. What good then could be the rest of the sequence ? Very little. More exactly, matchs made with modern sequences are still informative (because decay can not by chance create a given complex sequence) but not finding a match on the other hand, proves anything.

Additionally, there is no archaic sapiens DNA with which to compare Neandertals, not a single one, because African heat and aridity induce an exponentially worse degradation save for very special conditions such as clay ice or acidic peat bogs. We only ever compare archaic Neanderthal DNA to modern day humans, never to equally ancient African DNA, which does not exist. If we were to find a sufficient quantity and of equivalent quality of African DNA from an equivalent period, say 44 000 BP, it would appear just as different from us, just as incompatible with any living population as Neandertals do, due to the exponential decay rate of DNA that would have mutated it beyond recognition.

Conclusion: Anyone arguing, despite all this, that we did not descend Neandertal because its DNA has nothing in common with ours, should never have gone to school or taught writing. Sadly, this describes the majority of geneticists and paleontologists in that area… including most whose works we just refered to.

Antiracist conspiracy

There are two types of incongruities in the archaic human genomes sequenced so far. The first result from grossly underestimated decay artefacts which we talked about already, and likely scattered around the whole genome far, ruling any hope of retrieving any useful phylogenetic information.

In the case of mitogenomes sequences, because of the faster rate of mutation no haplotype can remain unchanged for more than 15 000 years. We would not expect archaic mitochondrial DNA to look any more ancestral, so indeed they do not. A different team from Stockholm University performed an extraction in isolation, on another specimen, from the Mezmaiskaya Neanderthal was very similar to the Feldhofer Neanderthal (involving Svante Pääbo) and has subsequently been shown to be similar to the Vindija Neanderthal. Therefore, it can be concluded with a high degree of confidence that Neanderthal DNA has been recovered and that this is not some kind of peculiar contamination. There is so far no reason to suspect foul play here, decay alone suffices—though in we case we find African mtDNA of a similar age (70-40 000 BP), this might warrant reasessing.

Nuclear DNA is another story altogether. Using available data, fixed differences were confirmed in all three high-coverage archaic genomes available at the time, the Altai Neandertal (Siberia), Vindija Neandertal (Croatia) and Denisovan (Siberia). These differences are ancestral in all three archaic samples and derived (fixed) in all modern humans. For each gene in the list the ancestral allele was identified in the following samples2.

If decay may produce identical markers given identical genomic contexts, I can not imagine how it could generate haplotypes identical to that of chimpanzees. There remains only a single conclusion compatible with the anthropological facts (excluding a replacement of population), however unsettling. If someone were to add a small amount of artificially damaged (ancient-like) DNA to a genuine ancient DNA sample-perhaps to introduce misleading signals or support a desired hypothesis-could this be detected?

Most aDNA authentication methods are designed to detect large-scale contamination (e.g., modern human DNA in ancient human samples). These methods rely on: damage patterns (e.g., C→T at ends), fragment length, endogenous DNA content (proportion of DNA from the target organism), population genetics consistency.

If only a small fraction of the DNA is manufactured to look ancient (with appropriate damage and fragmentation), it could, in theory, be much harder to detect. Low-level contamination is often masked by the natural variability and noise in aDNA datasets, and it is very possible to recreate the damage pattern, especially if we access to the untempered sample. If the contaminant DNA introduces alleles or haplotypes that are plausible for the sample’s context, it may not raise suspicion, in particular if your team (that of Svante Pääbo and David Reich) has been single-handledly fabricated the whole academic discourse over archaic humans for decades. Once you control the fossils, you have both the credentials and the means to create extremely convincing, hard data consistent among samples to support your agenda.

The best defense are rigorous negative controls, ppen data and transparent reporting, ongoing development of more sensitive detection methods, but most importantly multiple independent extracts. Which do not exist, as all published high-coverage Vindija Neandertal genome sequencing and analysis have been conducted by a single team led by the Max Planck Institute for Evolutionary Anthropology. It applies to every single archaic human assemblages (Neandertal and Denisova), and early modern humans too.

We established the impossibility for Africans to win over physically, cognitively and technologically superior more numerous natives in their own turf without the help of genetics, using only common sense and basic knowledge of modern races, ecology, climatology and paleontology.

If Jews and their Goyim slaves could pull off near-flawlessly the 9/11 false-flag operation without any leakage of information or turn-coats despite the hundreds of millions of dollars, professionals involved of all sorts and enormous logistic efforts involved, then handling about ten bags of bones and at most a the few dozens scientists handling fossils directly and likely all sorted out on basis of their favorable ideological biases, would be a walk in the park: the kind of conspiracy the world might never uncover directly.

And the size of that lie should not deter us from considering it: it stopped meaning anything when the Holocaust became public religion after World-War II. There is just nothing out of reach for the singular people that runs the West. As we will see, documented precedents do exist in recent memories of intentionally falsifying Neandertal fossils to promote the “modern human” supremacist agenda.

So, however far-fetched and far-reaching that may be, we must conclude at least nuclear DNA from archaic hominids have been tampered with since at least the 1990s, to serve a vile immigrationist, antiracist agenda.

Cooking and neoteny

Multiregional origins

Paleontological remains indicate a morphological and cultural continuity between Neanderthals, Upper Paleolithic and Neolithic modern men in Europe and the Levant. We also see a lesser progression from robust to “gracile” types in Africa, although change in cranial capacity in this case goes slightly upward and correlates with mixing.

In Europe the type immediately succeeding Neanderthals is the the “Cro-Magnon”, that used to be hailed as a milestone in the transition to the “modern” man: in reality it is in reality to Eurasia, with no presence in Africa. Which is to say, man in Africa and Australia is not “modern” at all: as we will see its cranium changed very little, if at all in certain regions it has barely changed since the start of the species 300 000 years ago.

On the other hand, judging from the oldest cro-magnons’ DNA, Scandinavians (Nordic Europeans) appear to be the oldest group and in continuity with the “early modern men” in Europe. Once we recognize a recent African origin is impossible, due to the preservation of so many recessive ancestral traits, we must conclude from their genetic and morphological similarities with Upper Paleolithic “Cro-Magnon” that Nordic people come from an unbroken line of European hominids dating back hundreds of thousands of years.

The typical “primitive” skull shape carried on in Neandertals and to a lesser degree later Europeans whose typical Neanderthal skull shape is regularly found in some individuals (in the form of the “hemi-bump” among other things), to the point that assuming a simple mixture between 3 to 5% as one can read is utterly laughable. Fair skin, eyes, hairs, a very high cranial capacity and elements of a more developped cranial superstructure can be found [in Scandinavia, which possess brains equaling the biggest ever found in ancient hominids. The Nordic race, adapted to the arctic conditions of life, is the Neanderthal man, who did not undergo any substantial mixing, as indicated by the conservation of many recessive traits. The change is explained nearly entirely by the degeneration we brought on us by ourselves, as we will see.

Most Europeand and East-Asians today age and die before reaching their maximum brain size. But the healthiest people do reach astounding volumes fully in line with their ancestors. Due to the variations in methods of measurement figures can get all over the place and high intra-population variation, however in 2020 Chinese adults averaged at 1510 cc, while old Finns between 60 and 77 years reach an average of 1572.9 cc, with maximums above 2000 c.

We can outline here the most likely scenario of our evolution history. Which African species went to Eurasia and when is a point of contention. But whichever it was, was allegedly the first to leave Africa either through the Levant or across the Mediterranean during periods of lowered sea levels and then gave rise to Neanderthals. The first pre-human fossil found in Europe dates of 7.2 millions years but we are too much alike for such an old splitting event.

The first skull complete enough to measure date in Europe dates to 850 000 BP with a size between 1000-1.150 cc. It is described as having mixed modern-archaic traits such as an elongated braincase, flat face and a “reduced” brow ridge. However its age (at the moment of death) is estimated at 10-11.5 while other specimen of the same group show “marked double brow ridge”, so we can safely deduce the specimen (ATD6-69) was simply young. This and later specimens firmly register in the wider Neandertal lineage.

FossilAge (years ago)
Dmanisi Skulls 1–51.85–1.77 MyBetween African ergaster and later Eurasian Erectus. 546–775 cc.
Grăunceanu>1.95 MyCut marks on bones
ATE7-11.4–1.1 MyHomo erectus – Partial upper jaw and cheekbone
Barranco León1.4 MyHomo sp. – Single tooth
Kocabaş1.1–1.3 MyHomo erectus – Partial skull vault. Estimated ≈1,100 cc based on curvature
ATE9-1≈1.2 MyHomo antecessor – Lower jaw fragment

The precursors of Homo antecessor and Homo heidelbergensis likely left Africa in multiple waves, with key timelines including an initial initial expansion at least 1.95 My ago. So… we can conclude, at minima the last common ancestors of Asians and Europeans after the split from Africans was 1.75 MY ago, likely 2, and their skulls ranged from 500 to 800 at the time, compared to other Ergaster in Africa.

It is actually not rare for Species 2 million years distant to interbreed. For instance the American and European bisons (1.7 and 0.85 MY) and false killer whale and common bottle nose dolphin (likely more than one million). And regards to genetic distance, hybrid sterility or morphological differences time matters less than mutation rates and where they strike: some features, obvious to the naked eye or not, weight more heavily on reproductive isolation than others, causing either anatomical, genetic, or behavioral incompatibilities.

Three racial types

Brain size evolution was not a linear process and there has been significant variation within species, until the late Pleistocene (126 to 40 000 uears ago ky). The evolution of intelligence itself is more difficult to ascertain. Very small-brained people, like Anatole France (≅ 941 cc) can function well in our society, so if we consider that natural selection ensures animals always maximize the use of their hardware (though complexity may have evolved too, beside size) in the environment they evolve for, the current European norm might have been the norm for Homo Heidelbergensis as soon as he reached African sizes of cranium, 1250 cc, or less.

Simple darwinian evolution does not explain anything, which is to say we are much smarter than we ought to be for mere survival: apes do as well as us even in near-polar climatic conditions. The selective pressure necessary to impulse the growth of such a costly organ as the brain in the two encephalization event since the Out-Of-Africa split (arguably, even before), is lacking. We believe the extra brain matter above, in so far as it corresponds to more neurons, relates more to extrasensory capacity rather than pure cognitive intellect3. When to set that threshold, we do not know, but we do know around 300 000 BC encephalization accelerated in Europe and stopped in Africa.

Morphological analysis (as opposed to unreliable genetic analysis) shows that European population underwent a loss of diversity coincident with the establishment all over the continent of classical Neandertals. It would explain the withering out of ancient, inferior lineages with a much lower brain size that did not undergo the second wave of encephalization.

Autodomestication through cooking

Neandertal morphology turned into to the modern European because cooking made our lifespan dropped from a millenia to a measly 120 years. The skeletal robustness that characterized Neandertals was shared by all previous species, even Early Homo Sapiens had a thicker femus cross sectio.

Mrs. Marie Cachet concludes that we are 99% Neanderthals, and that the degeneration into progressively “modern” humans has been caused by first hybridization with Africans, and secondly auto­domestication in the context of the Neolithic revolution and agriculture. Our thesis supersedes and updates hers but her website’s extensive archeological and anatomical evidence is worth consulting. The explanation of the cause of the degeneration however founders completely.

A time of extreme cold allegedly pushed European populations, reduced to very small numbers, to the brink of extinction and forced them to move to the Middle East and a more amenable climate. She supposes that we met Africans there and interbred ever so slightly though enough to curse the whole species forever, kickstarting a radical reduction in women’s pelvises’ size which spurted the shrinking of the brain (children with bigger brains not surviving delivery, along with the mother), a downward spiral compounded our ongoing auto domestication due to sedentarization and civilization. Domestication very often involves the selection of infantile traits, such as barking in dogs, which in wolves disappears in adulthood. This would explain the degradation of the other characteristic traits beside the brain, such as thinner bones, articulations and gracile cranial superstructure:

But hybridization is incompatible with evidences:

Hybridization is nothing more than an improvable deus ex-machina. The effects of domestication on the other hand, are easily demonstrable: wolves are bigger, stronger and smarter than dogs.Other domesticated animals show the same tendency. Domestication directly increase the apparent neoteny in animals: it reduces their health, weakens their bone structure and maintains baby-like characteristics and behaviors, or in the traditional sense, make juveniles capable of breeding, as it does now to ever younger girls. Autodomestication refers to the same process, but done to ourselves by ourselves in the context of a sedentary life and social norms increasingly alien to our natural instincts, making social thus reproductive success ever more dependent on immature, tamed characteristics.

Yet this seducing story does not add up with the facts either, not alone at least, because most of the change had occured 20 000 BC already, before sedentarisation and agriculture, thus autodomestication, ever started. Europe kept a hunter-gather lifestyle until 9-8000 years ago. We explain this transition by the mutagenic effect of cooking, which mirrors the effect of domestication. Domestication directly increase the apparent neoteny in animals: it reduces their lifespan, weakens their bone structure and maintains baby-like characteristics and behaviors, or in the traditional sense, make juveniles capable of breeding, as it does now to ever younger girls.

This might have always been a hidden yet crucial factor behind animal husbandry, introducing new abbherent mutations at a higher rate, on which then selection or genetic drift can act upon. In the case of Neandertals however, there is yet much more to what cooking has done.

Dentist Dr. Jack Cuozzo, a Christian literalist, by studying the fossils and applying his professional expertise, revealed many anomalies not congruent with the interpretations of paleontologists, especially regard to the growth rate and supposed age of the fossils. It has also revealed false reconstructions that no one with a basic knowledge of anatomy could have done in good conscience… revealed alterations and damage done to the fossils themselves with the obvious aim of forcing fossils to submit to their desires and to depict Neandertals as inferior. The fanatics of the theory of evolution, who like all fanatics react with violence and denial to any evidence of errors of their doctrines. These are serious but well documented accusations.

Dental and bone morphology indicate either a strong precocity and speed of growth even superior to that of apes… a much, much longer growth period than ours. We could give the example of a morphologically very young crane, but the jaw was well developed (indicating weaning), and the milk teeth showed signs of extensive wear which did not correspond to the estimated age. Problem is tooth wear is very much independent of lifestyle and over whole populations reflects only the passing of time. No amount of chewing, along the whole range of historical human diets, can explain the excessive wearing shown, leading to the invocation of ridiculous uses of teeth as tools

He noticed the excessive amount of wear on the first primary (baby) molars as compared to the second primary (baby) molars. This suggested a more protracted time between the eruption of these teeth than found in today’s children. Today’s children have their first and second primary molars erupt about 9 months to one year apart. These two teeth in the Engis child look like they were separated by a much longer time frame than that. This is what protracted eruption means: more years between tooth eruption.

Their primary (deciduous) teeth were bigger and much sturdier than our definitive tooth, sharing the same taurodontic structure4.

In all cases, wear is attributed to unknown lifestyle agents yet undiscovered in current populations, however primitive. In his words: “Age or ape !”. Cuozzo’s second discovery was the typical Neandertal morphology not being a simian trait we would have lost from the apes that prehistoric man preserved, nor the result of a very rapid growth, but the result of a multi-century growth instead. In other words: Protracted growth or unbelievably accelerated maturation in the same amount of time as us. Neanderthals represented men before the Flood, living up to a thousand years.

We continue to grow after puberty, though in a slowed down fashion (in absence of disease or deficiency). Bone mineralization continues in particular on long bones (limbs) and the face, causing an elongation of the face. We are all born with a very small face in relation to the head, a ratio increasing with age from some level of retrognathism (pushed backwards) in babies and fetuses to the flat face of maturity, to a forward extension on older people. Models, when projecting normal growth after our maximum lifespan of 120 years, magically produce back Neanderthal skulls toward 400 years.

The start of cooking

There is no certitude yet as to when or where cooking started, and how quickly it became general5. Using the Bible as a reference, the transition from long lifespans to normal ones lasted 2 to 3 000 years from Adam to Abraham’s first son, and just 500 starting from Noah, the Flood, when lives started shortening, the whole amounting to at most a dozen generations, maybe extending to a few more thousands if men sire sons very old. From our point of view, the disparition of “archaic” traist must have been nearly instantaneous, not gradual.

Here is your table with Robusticity, Brow Ridges, Face, and Jaw & Teeth fused into a single column for each specimen:

| | Vindija 33.16 | Oase 2 | Mladeč 1 |
|---|---|---|---|
| Alleged species | Neandertal | E.M.H. | E.M.H. |
| Dating (BP) | ≈40–38 ky | ≈40–38 ky | ≈35 ky |
| Robusticity | Bones slightly more gracile than earlier Neandertals; heavy but less continuous brow ridges; large, projecting midface; little chin, slight gap | Still more gracile; reduced but moderate brow ridges; flatter, shorter face with higher forehead; intermediate chin, no gap | Still more gracile; near-modern brow ridges; modern forehead, chin and jaw |
| Cranial capacity (cc) | ≈1,400–1,500 | ≈1,500–1,600 | ≈1,500–1,550 |
| Braincase shape | Low, elongated but slightly rounded | Globular with archaic features | Fully modern |

We can expect the traits subject to long, continuous growth or reshaping (elongated versus globular skull shape, the supraorbital torus, and the length of the face or, conversely, of the forehead) to be the first to modernize. Juxtaposing the last Neanderthals and the first Cro-Magnons seems to validate those assumptions. This comparison also matches a morphological succession no more than a few thousand years in the making, if not instant (given the lack of fossils). The last specimen of the classical type on record (Saint-Césaire) dates to 40,000 BP, and the type disappeared after 35,000.

The Neanderthal type in Northwest Europe disappeared earlier than previously thought, overlapping briefly with more degenerate Cro-Magnon types in other regions.

Cooked groups in various stages of degeneration coexisted with raw, long-living groups for at least a couple thousand years, since cooking did not spread instantly across the planet. No study so far has demonstrated the coexistence of Cro-Magnons / modern men and Neandertals on the same sites, but they definitely shared the same continent.

If we assume long lives came primarily from the extrasensory, and that dolichocephalic growth is programmed genetically, then a population would display, from the very first generations after falling to cooking, skeletons as robust as their forebears’ but with a skull shape immediately closer to that of modern Europeans, because accelerated ageing closed their cranial sutures before much cranial growth had time to occur. Indeed the first “modern men” in Europe had brains just as big as the average Neandertal and a very close skeletal structure, but a rounder skull.

Appearances and reconstructions

Those extraordinary eyebrow arches were not that common in Neanderthals, perhaps 3 or 4 fossils in all. The truth is, at an equivalent level of maturity (in proportion to their total lifespan) they retained more childlike and gentler features than we do, without sacrificing strength and robustness, as aesthetics and beauty, both inner and outer, were major elements of their instincts, as they are of ours. Children looked like… children, baby-like even, keeping fetal traits for years. Conversely, the reason behind those bigger supraorbital ridges is that they used to die much, much older. To confound the average age at death of preserved bodies (which imply a particular care in burying) with the average lifespan of a population sounds pretty ridiculous, yet most specialists make that mistake.

Recent reconstruction: “A new facial approximation for the man of La Chapelle-aux-Saints 1” (2023)
Realistic representation of Neanderthals

Several details have often been exaggerated or flatly falsified (and the fossils tampered with), such as a forward projection of the face to give it a simian look, or the absence of a chin. For two hundred years, reconstructions never stopped making him look primitive and brutal. From the start, centuries ago, the first archaeologists, being Christian priests, destroyed many relics and finds for fear of turning the official biblical story upside down. How can we place such men, similar to but better in every way than the current humanity saved by Jesus?

But the most recent and accurate reconstructions have now dropped the act, and finally show the truth of our origins, even though no one seems to have noticed. If figures must be put on it, assuming a rough proportionality (certainly wrong), they depict a Neandertal in the first third of his life, which would correspond to 30 years for us. What is preserved by lying and hiding artifacts is the culture the victors of WW2 cemented, with its antiracism and egalitarianism. The truth of our origins would prove Nazis and Nordic supremacists right overnight in the eyes of millions, if not tens of millions.

The explanation is a multi-regional origin of man, or more exactly of the three root races: Whites, Blacks (including Congoids and Capoids/Bushmen/San), and Asians/Mongoloids. The genetic aspect of the argument will be touched upon below. This classification, old as it is, is easy to apply, and recent population genetics (dependable, unlike when dealing with degraded 30,000-year-old remains) validate it.

It is easier to refer to craniometric measurements than to genetics, because such visible traits depend on a large number of genes, difficult to isolate as well as to quantify… while their expression shows in the mirror. Moreover, unlike bones, close to nothing remains of DNA after more than 10,000 years, according to the Arrhenius equation.
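The kinetics behind that decay claim can be made concrete. Below is a minimal sketch, assuming first-order bond scission whose rate follows the Arrhenius equation; the activation energy is an illustrative assumption, and the pre-exponential factor is calibrated to a published half-life estimate of roughly 521 years at 13.1 °C for ancient DNA, used here purely as an anchoring figure:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
EA = 127_000   # activation energy for strand scission, J/mol (illustrative assumption)

# Calibrate the Arrhenius pre-exponential factor A so that the decay
# half-life is ~521 years at 13.1 C (anchoring assumption, see lead-in).
K_REF = math.log(2) / 521
A = K_REF / math.exp(-EA / (R * (13.1 + 273.15)))

def decay_rate(temp_c: float) -> float:
    """First-order decay rate (per year) at a burial temperature in Celsius."""
    return A * math.exp(-EA / (R * (temp_c + 273.15)))

def fraction_intact(temp_c: float, years: float) -> float:
    """Fraction of bonds still intact after `years` of burial: exp(-k*t)."""
    return math.exp(-decay_rate(temp_c) * years)

print(fraction_intact(13.1, 10_000))   # temperate burial: essentially nothing left
print(fraction_intact(-5.0, 10_000))   # permafrost: most bonds survive
```

The strong temperature dependence is why the oldest sequenced genomes come from cold caves and permafrost rather than from temperate or tropical sites.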

Images: four primitive skulls, for a rather accurate Neanderthal–sapiens comparison

There is no question that humans do originate from Africa, but much earlier, leaving the continent along coastlines. Several groups then split off, giving Homo antecessor, from which Homo sapiens evolved, and Homo heidelbergensis, probable common ancestor of Neanderthal man. Other branches of Homo erectus appeared and disappeared, like the two dwarf species (no more than 1.50 m) with reduced brains, Homo floresiensis and Homo luzonensis.

Insular dwarfism aside, all branches then underwent a continuous process of encephalization toward larger brain size, but some more than others. Thus the species in Asia and Europe developed brains reaching a ceiling of 1,700 cm³.

Muscular attachments

Musculo-tendinous attachments form markings on bones indicating the level of stress muscles exerted on the corresponding bones throughout the individual’s life. Analyses all conclude that Neandertals’ attachments indicate a strength, pound for pound, similar to chimpanzees, with 1.3 to 2 times our upper-body strength.

Men and women were equally sturdier, to the point that for decades it was not possible to tell one sex from the other without pelvic remains. Their women would have folded the biggest of our MMA fighters at half their weight, while their men would throw them around like rag dolls. The reason for this difference in specific strength (that is, relative to body weight) in the case of apes is not that clear. Explanations usually revolve around a different anatomy favoring more upper-body strength, and longer, bigger muscle fibers we supposedly traded for many smaller ones and fine muscle control.


Humans share with sloths the distinction of having the highest relative amount of slow-twitch type I fibers, yet while bearing less muscle for their size than other mammals and obviously not requiring much explosive power, sloth arms are still twice as strong as ours for the same mass. So concluding that Neanderthals must have had explosive strength but not endurance, for instance (or similar lines of thought), is based on nothing but gross prejudice.

Not this (image: predator). But this! (image: guts)

Nearly all studies comparing humans to animals follow the inescapable injunction to minimize and justify human anomalies, such as our physical weakness, the abundance of female menstruation, cancers, mental diseases, etc. Our inability to consciously use more than 20 or 30% of our muscle power is justified by our brain leaving no sugar for muscles, while weakness is explained by a trade of explosiveness for endurance. However it is patently wrong that chimpanzees tire easily: on the contrary, their arms strain for a considerable time while moving from branch to branch, sustaining their body weight for extended periods. We certainly evolved to run more efficiently and longer, but that is it.

Despite their indubitable ape-like strength, we know from wall paintings considered by expert artists to rival the Sistine Chapel in expressiveness that their authors did not lack in the fine motor control department. So we may have developed smaller fibers than chimpanzees, allowing for selective activation and finer movements, but to deduce a loss of explosiveness in general makes no sense, as you can produce the same strength by recruiting many smaller units at once. Logically, cortical inhibition cannot be evolutionarily adaptive: there is no function in keeping dead weight around. We would keep the right amount of muscle needed for our ecology at any time, by accessing 100% of our power on command, and growing additional mass exceptionally.

We rather explain cortical inhibition as an unconscious failsafe to stop ourselves from using 100% of our strength and ripping our muscles, sinews and tendons asunder: animals do not care, because their body’s better protein composition can handle their strength just fine.

This, plus the higher mechanical advantage of the longer muscular attachments we had, accounts well for the twofold increase in overall strength. To imagine a hulking mountain of muscle or fat, as a BMI of 27 (closer to gorillas or space marines than actual humans) would suggest, has no justification.

Studies show that thicker bones and much greater power cannot be explained entirely by lifestyle requirements or training, as early modern humans (all Cro-Magnons but one) led comparable lives. So their thinning must have been an effect of degeneration. The same must be said of the bigger joints, brain size, women’s pelvises, etc. Neanderthals must have had a physique similar to a wild chimpanzee, just not as top-heavy. Thanks to a specific strength twice ours, twice the power could pass through the same articulation with the same muscle mass. Hence their real BMI must have been closer to Bruce Lee’s, who weighed at his prime 65.9 kg of pure muscle for a height of 173 cm, for a BMI of no more than 22.5.
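The BMI figures above can be checked with a line of arithmetic; the 164–168 cm statures come from the following paragraph, and the BMI-27 weights are hypothetical illustrations of the reconstruction being criticized:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Bruce Lee at his prime, per the figures cited above: about 22, under 22.5.
lee_bmi = bmi(65.9, 1.73)

# What a BMI of 27 would imply at the male Neandertal stature range
# (164-168 cm, from the stature figures in the text):
weight_low = 27 * 1.64 ** 2    # implied weight at 164 cm
weight_high = 27 * 1.68 ** 2   # implied weight at 168 cm

print(round(lee_bmi, 1), round(weight_low, 1), round(weight_high, 1))
# prints 22.0 72.6 76.2
```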

Neandertal men averaged between 164 and 168 cm, and women 152 to 156 cm, which is the height all children default to when fed raw food (excluding milk!), regardless of familial background. It has also been the norm for most of human history, before and after agriculture. The fact that Scandinavians react to dairy the way they do is accidental and irrelevant, because the element that triggers their excess height does not exist in nature.

Lifespan and ontogeny

Passage of time or genetic programming

Where we partly disagree with Cuozzo is in the understanding of bone thickenings such as the supraorbital bulges (brow ridges). Those characteristics are not a structural compensation for the mechanical stresses exerted by jaw muscles over time. Some Europeans still show absolutely Neanderthal-like strong facial features now and then, in particular in Scandinavia, despite standard ageing. And the biggest tori by far come from Australian Aborigines.

This trait is in reality common to nearly all early hominins and has been genetically programmed for at least 46 million years (the divergence between Old World and New World primates). Animal studies proved the absence of a mechanical function, and the composition of the torus supraorbitalis varies greatly from person to person, some full, some hollow or spongy, famously becoming “almost paper-thin” in Neanderthals.

Projections using current growth charts indicate that if we could live longer without dying or developing the degenerative conditions typical of aging, our skull would adopt the typical Neanderthal shape in about 400 years without cooking. It does not mean that with a better genetic program we could somehow go on forever: we must separate the effects of aging that are determined genetically as part of a developmental program from the effects that come from entropy and the body losing functionality, before considering what part cooking plays in aging now.

The robusticity of the lower humankinds (Africans, and Australian Aboriginals too) is ancestral, inherited from apes. It is the weakness of very modern populations that is apomorphic, a derived trait. Our features develop with the passage of time, but how far and how they do is not itself a function of time: the way we will look is determined mostly by genetics, without any implication for lifespan, metabolism or general ageing. Normally (cooking notwithstanding) this is first and foremost a matter of genetics, with variations inside and between ethnic groups.

Facial features result from the genetic programming of faster or slower rates of growth and thickening of a multitude of facial bones. Because of those rates, one race will take a very, very long time—centuries—to arrive at a certain stage of development, while another will visually mature much faster, which is to say run through its lifecycle, live, grow and die faster. Or it might just not develop those traits at all. Some races, like Pygmies and Bushmen, do seem naturally and distinctly more neotenic on a genetic level than others, without our involved history of degeneration. Our neoteny, on the other hand, is accidental—the consequence of a long life cut short—since fossils show we changed and they did not.

What cements this theory is the combination of the sheer impossibility of the mainstream theory with the assortment of European characteristics accounted for by both very slow maturation and the passing of a very long time. However unconventional (even if lowkey supernatural), a single simple assumption obviating the need for multiple incoherent ones conforms better with Occam’s Razor.

Implications of the possibility of longer lifespans

Once we acknowledge extended lifespans, we may rightfully wonder how exactly this was possible, and how and when it evolved and devolved. Firstly, we must assume this trait was not ancestral to the small-brained apes crawling out of Africa 2 million years ago. Nor do Africans show any sign of anatomical change in the past three hundred millennia indicating a continent-spanning dwindling of lifespan. Nor do we see any anomalies in the fossil record for any hominid before classical Neandertals and related Asian species.

This narrows the change down to the last 800,000 years. If we follow that logic, European skulls started noticeably exceeding African ones 430,000 years ago with the Sima de los Huesos population, showing early signs of Neandertal traits, at about the same time African encephalization stopped.

The only case of “related” species evolving a considerably longer lifespan than cognate species in a relatively short time (going from a factor of one to ten) are the mole-rats, living 37 years, which diverged from mice (1 to 6 years) 30–40 million years ago, and it shows in their vastly dissimilar looks, morphology and ecology. These adaptations are thought to have taken millions of years. Their genetic distance from mice, about 15%, is what separates us from pigs and cats. Whereas we are 99% identical to Africans or any previous species in the Homo lineage of the last million years, and shared pretty much the same diet, climatic range and lifestyle, with a near-identical physiology and anatomy.

Outside mammals, the only similar case among very closely related species happens within the rockfish genus (Sebastes spp.). Rockfish originated around 15 million years ago with a long-lived ancestor (over 200 years), some lineages degenerating to a mere 11 years in under 1 million years6. But they share very little with our case7, as their physiology is simpler and we know their longevity took a long, natural time to evolve.

In our case, we developed it almost overnight, to lose it just as suddenly, and without any change to our lifestyle… except our diet. We lost the longevity not through the suppression of a purifying pressure lifting the lid on mutations (not mostly, anyway) like rockfish, but through the addition of a gigantic source of new mutations with cooking.

We can likewise rule out an explanation hinging on a more efficient metabolism combined with better DNA repair mechanisms allowing a slower accumulation of cellular damage over time. How much energy we need to burn to achieve a certain (natural) lifestyle depends mostly on physics, and since our bodies changed very minimally (genetically or phenotypically) since apes, such an exploit, “cracking the code” so to speak, would be biologically impossible, besides having taken a ridiculously small amount of time despite our living in the same climates with a near-identical anatomy, diet and physiology as our ancestors and African cousins.

Another counter-argument is the surprisingly common maximum lifespan of around 120 years for all races, no matter the purity or admixture. How likely is it for convergent evolution to reach, three times in a row, different maximums (or, even more improbably, the same one), then degenerate, presumably each at a different rate and time, only to fall back on the same maximum?

Because degeneration is chaotic, for the same lifespan to arise there must be common, organic constraints, which we believe is simply the biological maximum lifespan for the Homo genus. It means that while our body did get weaker, even from birth, possibly due to mutations in DNA repair genes, the biochemical or physiological difference between any modern race (or chimpanzee) and Neanderthals is negligible. A sudden tenfold increase in longevity in such a context cannot come down to a few genes.

Spontaneous-mutation and genetic-programming theories like disposable soma8 offer distinct but overlapping explanations for aging. Despite their accumulation with age being well documented, recent empirical studies challenge the direct role of spontaneous mutations in ageing. Specifically, experiments in Drosophila reveal most spontaneous mutations have consistent effects across a lifespan rather than causing escalating harm, and fail to drive the progressive cellular dysfunction typical of ageing. Similarly, human DNA sequencing studies find no clear link between somatic mutation burden and aging phenotypes, suggesting other factors dominate.

So spontaneous mutations alone show limited explanatory power, whereas genetic regulation of bioenergetics and stress responses emerges as a central driver. Evolutionary models like disposable soma explain lifespan variation across species, and findings observed across species do suggest aging is actively regulated through conserved pathways controlling energy metabolism and cell proliferation. Cooking, on the other hand, induces a much higher mutation rate than normal and causes a parallel, paradoxical kind of ageing that stacks on top of natural processes.9

Mutation buildup alone might not be a problem even after centuries of living, if we stopped cooking. Research today points more to epigenetic problems as the key cause of aging. Animals typically extend their lives either by slowing down their metabolism or through better DNA maintenance, but such changes take time, besides stumbling over obvious ecological roadblocks: if a starving mouse lives twice as long, in the wild its halved activity level would get it killed in a quarter of the time.

We argue that this “epigenetic” health merely reflects the level of metapsychic entropy in the body, which a spiritual principle can directly address. Aging might be shaped by both physical damage and a deeper organizing principle: by our lifeforce and the ability of the body to maintain it. If true, the current fad for epigenetic anti-ageing methods will likely not amount to much, even if they worked as intended. Indeed, human trials are sparse and existing studies unsurprisingly show modest effects.

Energy creates order and information as its very function, so even if we could reinstate functionality locally in the body through external means, metapsychic decay would cause cells and organs to revert to aged states or develop new dysregulations, much like the punishment of the Danaids in mythology10. Neandertals had both an adequately robust body, inherited from apes, and a much stronger soul.

Neandertals and cooking

Dental calculus from a number of individuals at El Sidrón (47,300–50,600 BP) showed the presence, in a number of food sources, of aromatic substances produced during heating. The researchers unambiguously state that there is no definite evidence, that all “the relative abundances of these combustion markers are entirely consistent with those found in wood smoke” (unsurprising for a cave-dwelling caveman), and that “there were no diagnostic protein markers or steroidal compounds indicative of meat ingestion”. Yet throughout the article they cannot let go of their marked preference for the cooking hypothesis.

One reason is their insistence on describing “starchy food” remains in the dental calculus, which they identify as probable seeds; their minds must be aching to see an analogue to our cereals, even though the category includes all kinds of edible seeds as well as sweet potatoes and beans. The truth is researchers are every bit as prejudiced and close-minded as the wider public, if not more.

Since none of these can be eaten raw in any appreciable quantity anymore, this article and a number of subsequent ones now take cooking as an established fact. Several other times, charred food remains were found on site, which is consistent with simply dumping one’s (abundant) food remains into the fire to feed it. All in all, there is no evidence debunking a non-adaptive irruption of cooking into our lifestyle, with the consequences that we know.

Modern strategies to downplay brain size differences

Several strategies are employed to give the impression that Neandertals were not as smart as modern people: either their bigger brains meant nothing, or their skulls were emptier and their brains thus no bigger after all. These “corrections” all result in a lower encephalization quotient, as we will see, without any justification, based as always on unproven assumptions. The only real motive is to make them look stupid. They “correct” for differences in body weight, which they completely make up (adding a good 20 kg more than a human of that size), and they suppose a proportionality between eye sockets and the visual processing center of the brain.


To sum up their arguments:

To imagine the skull was emptier because of its shape, or their brain literally smoother, is just ridiculous. We evolved for this shape; all hominids have. Why would the brain not squeeze itself to fill the whole skull, no less than it does now? When you do not know, do not assume the worst for the sole reason that it makes you feel good. This is just slander.

An alternative hypothesis states that brain size positively correlates with eye size, thus the bigger eye sockets (not bigger eyes, mind you) of Neanderthals would explain their bigger brains without implying higher intelligence, most of that additional mass being dedicated to sensory processing. The correlation checks out, but the conclusion does not: if we reasonably assume proportionality (twice the visual information requiring twice the processing power and gray matter), then the increase should logically concern the centers dedicated to visual information processing only. Instead, the correlation fits overall brain size.

The results showed that the biggest brains, averaging 1,484 millilitres, were from Scandinavia, while the smallest brains, around 1,200 millilitres, came from Micronesia. Average eye socket size was 27 millilitres in Scandinavia and 22 millilitres in Micronesia.

Latitudinal variation in light levels drives human visual system size, supplementary materials

If the theory held true, the increase should be 23% of the 20% of the brain doing visual processing, which would mean 56 more cubic centimeters, not 300. For that extra volume to have no correlate in higher brain functions, our eyes would need to be not 23% bigger than those of Micronesians but about 118%—more than twice as big. Instead, the correlation likely reflects a general allometric scaling effect: populations or species with larger brains overall also tend to have proportionally larger sensory cortices, including visual areas, rather than vision being the primary driver of brain enlargement. That explains why the correlation exists not only with bigger brains, but with higher intelligence too. In other words, that argument proves exactly nothing, except that species native to cold regions invariably evolve higher intelligence. Regardless, there is no correlation between eye sockets and eye size in Neanderthals, so the question is moot.
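A quick check of that arithmetic from the quoted figures, assuming (as the criticized argument does) that visual processing occupies about 20% of brain volume:

```python
# Figures quoted above: average brain and eye-socket volumes (ml).
brain_big, brain_small = 1484, 1200   # Scandinavia vs Micronesia
eye_big, eye_small = 27, 22

VISUAL_SHARE = 0.20  # assumed share of the brain doing visual processing

eye_increase = eye_big / eye_small - 1          # ~23% bigger sockets
visual_small = VISUAL_SHARE * brain_small       # ~240 ml of visual cortex
expected_extra = visual_small * eye_increase    # ~55 ml predicted by the theory
observed_extra = brain_big - brain_small        # 284 ml actually observed

# Eye scaling needed if ALL the observed extra volume were visual cortex:
required_ratio = (visual_small + observed_extra) / visual_small  # ~2.18x

print(round(expected_extra), observed_extra, round(required_ratio, 2))
# prints 55 284 2.18
```

The predicted 55 ml falls far short of the observed 284 ml, and closing the gap through vision alone would require eyes more than twice as big.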

The other main argument against the importance of a bigger skull is the belief that Neandertals were more massive than us for the same height, the bigger brain serving to control the body. The motives for such an idea (and lie) are twofold. Among related species, differences in brain size have been correlated with differences in body mass: the more mass, the more information to process. This led to the false assumption that relative (to the body) brain size matters more for intelligence than absolute size, although evidence is slim and scientific opinion has mellowed this last decade: it is now believed that most of the brain does not scale linearly with body mass, as most bodily processes occur identically in a 1.4 m tall Pygmy and a 1.9 m Icelandic strongman. But it does not matter: Neandertals were not at all heavier. Their BMI was no different from ours.

Meat and isotopic profiles

The prevailing opinion holds that Neandertals were hypercarnivores, which is to say that they spent their time hunting mammoths and other incredibly dangerous big game, in order to eat more meat relative to their weight than hyenas do, even though hyenas eat almost nothing but meat, and while we have found traces of fruits, plants and seafood in Paleolithic leftover assortments. This opinion resulted from the use of isotopic analysis to determine diet: measuring the isotopic proportions of certain atoms (zinc, strontium, carbon and nitrogen), because heavier isotopes statistically take less part in the molecular exchanges associated with bone turnover (for metals) and cellular respiration (for carbon).

Determining trophic position from δ15N rates has been criticized due to its correlation with multiple factors, in particular body size, development rates and developmental time, which correlate with δ15N enrichment, suggesting longevity-related life history parameters influence isotopic signatures independently of diet or basal metabolism. Studies on the molecules themselves show an enrichment in heavy isotopes for older proteins, consistent with metabolic processing over time.
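For context, the conventional conversion from δ15N to trophic position is a simple linear model with a fixed enrichment of roughly 3.4‰ per trophic step. The sketch below uses illustrative, not measured, values, and shows how a few permil of non-dietary enrichment would shift the inferred level:

```python
def trophic_position(d15n_consumer: float, d15n_base: float,
                     base_level: float = 1.0, enrichment: float = 3.4) -> float:
    """Standard linear estimate of trophic position from nitrogen isotopes.

    Assumes every step up the food chain adds a fixed d15N enrichment
    (~3.4 permil); this fixed-enrichment assumption is exactly what the
    criticism above targets.
    """
    return base_level + (d15n_consumer - d15n_base) / enrichment

# Illustrative (assumed) values, in permil: local plants at 2, a fossil at 12.
as_read = trophic_position(12.0, 2.0)          # ~3.9: reads as a top carnivore
# If 3 permil of that signal were longevity-driven rather than dietary:
corrected = trophic_position(12.0 - 3.0, 2.0)  # ~3.1: a full level lower
print(round(as_read, 1), round(corrected, 1))
# prints 3.9 3.1
```

Because the model attributes every permil of enrichment to diet, any physiological source of δ15N accumulation is automatically read as a higher trophic level.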

Development rates influence the accumulation of heavy isotopes (reflected in δ15N or δ13C values) in biological tissues through the kinetic and metabolic processes tied to growth speed. The isotopic composition of tissues increases more noticeably with slower development, because longer metabolic discrimination and fractionation favor heavier isotope accumulation. Species with the same diet and metabolic rate but longer lifespans exhibit higher δ15N values due to cumulative isotopic fractionation during protein metabolism and nitrogen cycling.

In catabolic reactions, lighter nitrogen-14 is preferentially excreted and heavier nitrogen-15 accumulates progressively in tissues, a process caused by kinetic isotope effects. In invertebrates, growth rates influence nitrogen isotope incorporation, with slower growth causing slower isotopic turnover and thereby allowing more cumulative heavy-isotope enrichment in tissues. The effect of slow, protracted growth seems confirmed from multiple angles. The key seems to lie in the catabolic rate of amino acids: lower turnover and lower oxidative rates lead to heavier isotope accumulation in tissues. This holds true on a molecular level as well: longer-lived proteins show an isotopic profile skewed toward heavier isotopes even with a controlled diet.

Surprisingly, the relationship between longevity and abundance of heavier isotopes might go both ways: studies in yeasts show ageing expresses itself, despite no morphological indication, in a decline of heavier isotopes, while a diet enriched in them drastically increases lifespan, more exactly the number of times a cell can regenerate before dying. Hence that particular isotopic profile might be both a cause of longevity, indicative of a different, adaptive cellular metabolism, and a mechanical consequence of it.

In summary, elevated human-like δ15N values in Neanderthals can logically be considered an intrinsic biomarker of longevity. Identifying diet in early hominins has been difficult because of the diagenetic loss of organic matter in collagen older than 200,000 years. But carbon and nitrogen isotopes bound to tooth enamel in fauna from an approximately 3.5-million-year-old site that included several Australopithecus fossils proved that hominid diet was still mostly vegetarian then, and that meat, by the way, did not cause the expansion of the brain. Sadly there is a notable scarcity of detailed nitrogen (and zinc) isotope datasets in the intermediate 2.5 My to 120 ky window that would interest us, compared to the richer datasets available for earlier Australopithecus and later Neanderthals / early Homo sapiens.

Skeletal aging markers indicating a Neanderthal died at an apparent “middle age” (around 37–50) only reflect relative physiological degeneration rather than maximum chronological age, the individual meanwhile accumulating isotopic signatures consistent with such longevity despite a metabolism and diet comparable to early humans.

To summarize, here are their conclusions:

| Isotopic ratio | Differences in Neandertals |
|---|---|
| δ15N | Higher δ15N values than contemporaneous carnivores and other hominids (e.g., early modern humans): should indicate top-level carnivory, possibly reliance on large herbivores like mammoths |
| δ13C | Similar or slightly higher compared to herbivores; in some studies the ratio matches terrestrial C3 plants and animals. Also supports terrestrial carnivory |
| δ66Zn | Lower δ66Zn values relative to carnivores, supporting a higher trophic level, hence hypercarnivory |
| δ68Sr | Not informative for trophic level but confirms local origin and local dietary sources |

As the animal dies and is buried, exchange with the environment ceases and radioactive isotopes decay. But if we can date the sample, we can estimate the original proportion. As it stands, opinion is nearly unanimous that Neandertals were the greatest predators to ever exist.
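The back-correction mentioned here applies to radioactive isotopes such as radiocarbon (stable ratios like δ15N do not decay); a minimal sketch inverting the decay law:

```python
import math

def original_amount(measured: float, age_years: float, half_life_years: float) -> float:
    """Back-correct a measured radioisotope quantity to its value at burial,
    inverting the decay law N(t) = N0 * exp(-lambda * t)."""
    lam = math.log(2) / half_life_years
    return measured * math.exp(lam * age_years)

# Radiocarbon example (half-life ~5,730 years): after 40,000 years a sample
# retains well under 1% of its original 14C, so dating the sample lets us
# recover the starting proportion from the tiny residue.
print(original_amount(1.0, 40_000, 5_730))
```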

Yet primates are by and large vegetarian, and no species can switch diet entirely without suffering drastic drawbacks, such as the reduced brain and lower activity of giant pandas. If snow monkeys can stay true to their genetic preferences in the same climate as Sweden (Hokkaido’s mean annual temperature is 10°C, against 8.8°C for Stockholm in 2022), then by no means would climate have forced Paleolithic humans to stray from theirs, let alone regularly. Conceivably our main protein source could have been shellfish instead of plants and insects, but neither idea is very consistent with these profiles. Fish as the main protein source does not fit well either. Only rotten meat could, but it would not explain why earlier hominids ate so drastically differently.

Isotopic analyses have also been conducted on Middle Pleistocene hominins, dating back to around 140,000 years ago. For instance, fossils from a site in Israel dated to approximately 140,000 to 120,000 years ago suggested varied but largely plant-based diets, with some evidence indicating little to no meat. Similarly, studies of fossilized teeth from South Africa dating to around 3.5 million years ago determined that early hominins like Australopithecus africanus primarily consumed a vegetarian diet, with little evidence of significant meat consumption at that time.

The same analyses were performed on hominids older than classical Neandertals (anterior to 120,000 BP) as well as younger ones (so-called “early humans” in Europe). According to these methods they all shared a mostly vegetarian diet, of the kind we would expect from any primate whatsoever. Incidentally, the boundaries of the hypercarnivore signal appear to coincide precisely with, on one hand, the likely moment Biblical lifespans became prevalent (roughly the Last Interglacial), and on the other, the moment they ceased, with “modern humans”. We do not think it is a coincidence.

We believe that a higher lifespan increased the accumulation of heavier isotopes one way or another. For the moment this conjecture does not appear to be supported by science; on the contrary, lower levels of oxidative stress and slower turnover rates should bring the signal closer to the baseline proportion of the dietary source, so plants and marine food. In theory, lower rates of either cellular damage or bone turnover, or both, should shift the isotopic equilibrium toward lighter, not heavier, isotopes. But then again, there are no comparable cases in any other animal (in the absence of relevant data from rockfish), so no disproof from direct measurements either.

Our conjecture has the benefit of resting on three certainties:

  1. Profiles conform with primate expectations before and after, ruling out diagenetic explanations (transformation of remains after death).
  2. It is not possible that their diet was so different, systematically and consistently, across eighty millennia, from England to the Middle East, when no one before or after ever even approached it.
  3. They lived very long, and the measurements correspond perfectly with the certain end of that longevity (modern humans) and its likely beginning (big-brained Neandertals), roughly at the Last Interglacial circa 120,000 BP.

Given this apparent correlation, and the sheer logical, ecological, behavioral (only madmen would rely on raw meat exclusively when given a choice) and metabolic impossibility of systematic hypercarnivorism outclassing even obligate carnivores like hyenas, whatever the mechanism, our theory is more likely. If true, then given its consistency for Neandertals, by tracking that “hypercarnivore” signal we obtain a means to determine whether a specimen or population had extended lives, in particular among non-European fossils, in Africa and Asia, for which good morphological baselines are lacking, or for which the body parts required by Cuozzo’s exacting tooth and cranial analysis are simply missing.

Antediluvian civilizations

White nose on sculpture
Porphyry stones in Ollantaytambo

Fossils are not the only undeniable evidence, in the far distant past, of more evolved cultures before recorded history. They left stunning constructions (in size and/or precision) and other out-of-place artefacts we recognize as buildings, dams, aqueducts, mine shafts or whole geo-engineered areas, with a precision in hard stone that is challenging if not still impossible today, while other remains such as the Yangshan monument are so gigantic as to appear scooped from the side of the hill by a gigantic hand wielding a butter wire. From the similarities found on all continents except Antarctica, and the impossibility even today of replicating many of those achievements, at least without extreme difficulty or cost, we gather that a global civilization, or a cycle of related ones, once spanned the whole planet, from the Equator to Crimea, Greece, Egypt, Ethiopia, the Arabian peninsula, India, China and Japan, possessing higher technology than any historical culture, including our own.

Despite the doxa abounding in unanimous yet unsubstantiated beliefs (such as the “Sea Peoples”), mainstream archeology attacks with ferocity anyone contradicting the professed version of history, resorting to lies, cheating, misrepresentation and constant dishonesty, but the material evidence cannot be denied. Numerous sites throughout the world display clear, horizontal, precisely parallel marks typical of excavation machines and impossible to produce by hand, implying tools at least comparable to modern diamond drills.

Diorite, harder than steel
Transparent diorite vase
1000 T vs 1250 T, really ?
The Thunder Stone: from 1,500 down to 400 tons – Jiri Mruzek

While the origin of, or the difficulty in replicating, some sites is up for debate, and we can expect to find more in the soil of the Sahara or the Amazonian forest in the future, we choose to illustrate our points with the following artefacts, whose prehistorical origin we deem absolutely beyond contention:

The history of this meta-civilization is a debate in and of itself, but not the timing of its demise: a gradual diminution in technical mastery can be concluded from evaluating the sites, followed by the end of their global dominance (for all intents and purposes, their final disappearance) coinciding with the asteroid impact and climatic upheavals of Biblical scale at the Younger Dryas boundary, approximately 12,000 years ago.

Addressing criticisms

Many sites display these same tell-tale signs of technological anachronism, often hinting at the exact same construction methods thousands of kilometers apart. Mainstream archeology’s theoretical counter-arguments to the idea of a high-technology civilization (anything more advanced than classical antiquity) can be summarized as such:

  1. An advanced civilization would require agriculture and there was none as genetics show no cultivated species.
  2. We find no cities which would be Atlantean or however we choose to call them.
  3. Technology to our level requires whole industries, and metal and machines, which we do not find.
Primitive Egyptian tools

But none of these criticisms stand.

  1. Firstly, agriculture as we know it is only a requirement when relying on cereal culture and husbandry. If sizable populations were needed, there are ecological ways to increase food production, such as food forest gardening, that would leave no genetic traces. Depending on the demography and the crossbreeding practiced, it is also plausible for selected plant species to rewilden and, with time, lose any signal of domestication. However, we argue that these cultures did not require a population concentration much higher than that of typical hunter-gatherer villages, obviating the need for agriculture.
  2. There is no dearth of advanced cities. “Atlantis” (or cities of the same or kindred people) was never lost: it is found in predynastic Egypt, many Indian temples, the rock-cut city of Petra, and Peru in South America. Instead of looking for unique, distinct characteristics at odds with historical styles, it appears that many styles and innovations we attribute to historical cultures were inherited and emulated from older, more advanced civilizations. What separates those sites is a level of precision and/or size impossible for Bronze Age Egyptians, Chinese, Indians, etc.
  3. If machines were few and far between (because constructions were infrequent and populations small), one can argue there is no reason to find any left tens of thousands of years later. Inheritors would not be able to operate, create or maintain these tools over time, gradually losing sight of their purpose. Metal rusts, and more importantly any usable piece would be hoarded as an object of worship or prestige by more primitive people akin to the modern “cargo cults”. They would scavenge, smelt or repurpose everything, as has always been done. And only a few percent of the Sahara desert or of international coasts have been explored anyway. Besides, there are already artefacts found in Egypt (the gypsum disk) which do look like a gear of sorts, if we are willing to consider this hypothesis.

There are good reasons why the known ancient peoples could not have been much more advanced than we already know. Even though their reliance on slavery always stunted any wide-scale development of machinery, Greeks and Romans did have mechanics, made of wood and a bit of iron, then eventually mild steel, but they never made the transition the West went through in the mid-XVIIIth century with regard to power transmission, without which industrialization is impossible. Hence steam power never outgrew the status of intellectual curiosity. Everything new we discover, every scrap of lost knowledge we find, always fits roughly within the technical limits of the time, as assessed by the physical artefacts (buildings) and written or oral historical records they left.

Rome in particular demonstrated, time and time again, the quasi-impossibility of working with single pieces in excess of 300 tons. From a certain size, issues of sheer physical impossibility must be considered: only so many artisans can toil in the same place, and we would also find enormous amounts of copper or bronze or iron residue on site (and on the blocks), but we don’t. Past a certain weight the ground caves in while wood and ropes snap; in one case, to cut basalt, a saw of 8 m diameter and no less than 4 mm thickness proved mandatory. Non-mechanical methods do not scale, making megalithic sculptures such as the Baalbek monoliths impossible no matter the number of people or the time at your disposal.

The massive scale of what we have discussed hints not just at the antediluvian civilizations’ incredible capabilities but at their disconcerting carelessness, in the sense that, far from requiring inordinate efforts, their achievements seem to have been easy, much more so than they would be today; they seemingly spared themselves no work, even when the most impressive or valuable blocks lay in the most random of places, out of view, for reasons hard to fathom.

This positively screams automated production, with which no amount of manual skill can compete. Moreover, the microstructure of limestone blocks, the presence of air bubbles and some other organic material, and parallel magnetic alignment prove their manufactured origin. This does not preclude the necessity for machines though, as attested by the hundreds of tunnels with right angles and vertical shafts cut into the bedrock. Moreover, there are plenty of quarries covered with marks identical to those of modern rotary drum cutters, with half-finished works in them, including the heaviest blocks: the two unfinished obelisks of Aswan (1,100 tons and an estimated 1,500), in granite. Some boxes even show the veins of different minerals, something impossible with geopolymers/reconstituted rocks.

Furthermore, the presence of deep overcuts can only mean the material was in a solid state and could not simply be melted back or its cuts conveniently filled with composite rock paste. Simply put, the evidence indicates both manufactured stones and better power tools existed, showing a physical mastery ahead of ours on every account.

Ancient Quarry Mount Nokogiri Excavated by Prehistoric Machines or Hand Tools?
Comparison between marks at the Nokogiri quarry and modern rotary drum cutter marks

And then there is the superhuman precision needed to see and maintain curved, symmetric geometric patterns over a multi-ton statue and duplicate it dozens of times perfectly, necessitating a computer… or something with the capabilities of one.

Contradictions to the laws of historical development

In our opinion it is not the loss of high technology, due to whatever circumstances, that is inconceivable, but the inability to replicate it in the course of a single millennium. What differentiates modern science, born only these last 200 years, from the slow, steady and mostly haphazard piling up of improvements and discoveries that has characterized progress so far historically, is an actual understanding of the underpinnings of matter.

The long accumulation of skills alone does not come close to accounting for the precision of the pyramids. Accurate measurements were necessary. It would imply the development, above and beyond anything seen before the XIXth century (in some cases still unseen), of whole areas of science, of power production and transmission, and of the teaching of advanced chemistry and materials physics.

Comparison between megalithic constructions and misfitting writings

Yet the following points rule out the hypothesis of a standard “high-tech” civilization:

To solve this paradox we must envision a very different path to progress, with drastically fewer people and none of the hiccups that marked our ascent (heavy metal pollution and global deforestation well before the industrial age). Technically, most Egyptian artefacts could be explained by human-powered machines [11] and an (inhumanly high) level of handicraft. But many other things, of sheer material power or intricacy, cannot.

A magic-based machinery

Serious, honest researchers would rather settle for inconsistency than accept (at least publicly) an even more disturbing and harder-to-swallow alternative: the notion of an entirely different model of civilization, based on psychic abilities. This would change two things:

Such a culture would present positively strange aspects: an explosive period of development so quick as to leave no archeological record, giving the impression of a people coming out of nowhere… or somewhere else; and the state of technology of that culture would be at odds with itself, in some aspects breathtakingly advanced, with a properly magical efficacy, in other areas as primitive as wished, opting for the most satisfying solution to any need with little concern for our logic. Visions would provide usable chemical recipes to produce geopolymer from local plants and minerals without any need to learn physical chemistry.

Beginning and end of an era

So, when did these lost civilizations start, who were they, and when did cultures using actual magic in their everyday life cease to exist? We notice that all the most impressive megalithic sites, judging from either the weight of blocks and/or their precision (usually both), from Petra to the Osirion to the Chinese granite “tombs” and the gigantic Yangshan Monument, display the same machining marks.

Our whole principle of reasoning, involving magic, constitutes an inversion of the usual principles applied in archeology. Normally simpler, less technically challenging (more primitive) forms are deemed older, while we deem the more advanced remains to come earlier on the timeline.

In this preceding historical cycle based on magic, because visions do not involve an accumulative process, the exact opposite happened: the achievements demonstrating the most impressive technical mastery (biggest blocks, most precise architecture) appeared earliest, then degenerated into allegedly clever but less materially impressive architectures featuring smaller, more manageable blocks. While logistics can be a product of the intellect and improve over time, the weight of blocks seems to us the element most requiring a direct usage of psychic powers. Ergo, in all likelihood, especially when they appear in numbers, the biggest blocks (in excess of 400 tons) were the products of the most psychically advanced, hence the earliest, megalithic civilization.

And in Egypt, the persons represented by the biggest statues, exceeding 1,000 tons, and by the most magnificent polygonal walls, indicative of the absolute peak of cyclopean ruins worldwide… wore clothes.

Neandertals would never, ever need clothes in that part of the world, or anywhere for that matter. Nor would we. This indicates “shame” or “modesty”, the sense of the genital organs having a special visual effect (or that effect needing some reining in). Thus it entails that the people behind the most ancient and magnificent megalithic monuments already cooked food, fitting with Egyptian and Sumerian lists of mythical or divine kings going back 37,000 years.

It makes sense that Neandertals in full possession of their powers, as we see below, would never even need or think of building anything, but would treat the entire world as their bed and playground. Machines mean nothing to someone who can do everything they can do and more.

Only later must we have come to rely on tools to do our bidding. Our power of imagination dwindled along with our brain size, and as Wilhelm Reich said, “pleasure is energy per se”. A Neandertal Moses, striking the ground with his staff to quench the Israelites’ thirst, would not consider the intricacies of how to make it happen, just where it can (or rather, has to), before thinking “LET THERE BE WATER!”. In modern terms, if Atlanteans were “techno-sorcerers”, then Neandertals must have been full-fledged magicians or reality-warpers.

The myth of the Flood refers not to a singular event, but condenses the whole period of the Younger Dryas Boundary (circa 12,900 to 11,700 years BP), including numerous catastrophic events, from the direct impact of a large meteorite strike on the Laurentide ice sheet in North America to the numerous airbursts resulting in widespread fires burning 9% of Earth’s surface biomass, and catastrophic outburst floods. Some myths kept the meteorite impact (Plato), some the floods, or the fires, or both. But each time the catastrophe ended a world.

People must have lived off their metapsychic capital, only stalling entropy until the YDB sent them a message they interpreted as the moment to exit the stage of history, knowing they had lost the magic of their ancestors and could neither replicate their automatons nor operate them. When the meteorite struck and the isostatic post-glacial rebound sank the continental shelf of the Azores, which might or might not have hosted the literal Atlantis of Plato, it merely affected a decadent people whose “human blood”, as Plato said, had already taken over the divine blood, their higher magical nature compared to primitive hunter-gatherer populations.

Even with little magic, their conventional knowledge of agriculture, masonry and basic chemistry would have been enough to subjugate all primitives, and to outbreed them in a few generations. But knowing their very identity had lost its meaning, they chose to leave of their own accord, sowing here and there the seeds of culture, leading eventually to a whole new cycle of civilization built on cereals, degenerate sexual instincts and the virtual absence of psi powers.

Gods of Legends

Thanks to their more advanced brain and their unfettered access to energy, the inescapable conclusion to us is that pre-cooking Europeans once lived as gods, on par with if not above most wizards of fiction.

Our whole instructivist approach is predicated on considering that anything even a single one of us can do now, if dutifully attested, men could do then, much better and/or more frequently. In nature, animal psychology stems directly from genetic programming, neatly adapted to the natural environment and bodily constitution (which includes the brain’s structures) for millions of years. Under a natural diet, humans should be no different: they should obey their innate programming. This justifies looking at the best and brightest of our race not as the ceiling, but as a hint at the potential all descendants of advanced humans share.

Regards to lower races (Africans and related species), not enough has been confirmed by credible witnesses, let alone statistical tests, to support the potential for more than limited Psi abilities in Africans, a notion congruent with their utter lack of imagination, low IQ and the fact “Homo Sapiens” did not change anatomically since the first apparition of the species in the fossil record 300 000 years ago, according to mainstream scientists.

In the short time the rawfood and metapsychanalic experience lasted, we were fortunate enough to observe that, if awakened soon enough in a child, visions fuse with the normal sensory fields and functions constantly, without conscious prompt, slamming wide open “the doors of perception” as per the wish of Aldous Huxley, but completely naturally.

In a world where nothing bad would ever happen, no true accident, no wanton murder or destruction, no war or Jewish plot to breed us out of existence, there could be no better motive for magic and no better way to further the collective destiny than to enhance our consciousness by seeing more, hearing more, touching more of the world’s infinite beauty. All that we talk about before must have been commonplace. We lived in a magical world beyond our wildest imagination, in fairy tales.

Higher brain functions

For starters, all autonomic functions within reach of nerves or hormonal influence could likely be controlled at will: the extrasensory explains more than the outwardly magical, lying inside our very brain. Kim Peek, the most famous mega-savant, despite a brain grossly damaged at birth, developed abilities including: perfect recall by age 4 [12]; perfect reproduction of a scene or song; reading one page per eye at once, in mirror reflection, or spun at an arbitrary angle, all instantly; “hearing” TV dialogues with the volume muted [13]. He never stopped learning over the years, including new skills and new abilities, spontaneously.

Explanations amounted to the hypothetical use of other parts of the brain due to his very disease, but the idea that normal capacities are somehow traded in for special talents is contradicted by his constant learning with no concomitant loss, while his mere survival past the age of 3 as anything more than a vegetable—like 100% of children with agenesis of the corpus callosum—was already a miracle. Furthermore, in general the vast majority of autists display no such redeeming talent. It is true that in some cases a young savant with autism loses their special talent when they learn to talk, but not generally, nor was it the case here.

It feels as if the brain can access an extradimensional, infinite storage space and computing power, the same dimension we access during out-of-body or near-death episodes. Energy fundamentally changed the game. Hence all genius and most effortless extraordinary abilities, unlocking many aptitudes within the purview of our senses but running contrary to some basic subroutines of perception and genetically wired algorithms, normally hard if not impossible for our conscious mind to access.

We posit that it functions by storing and retrieving information of a visionary nature in the infinite spiritual dimension, offloading computations of arbitrary complexity, thus allowing for literally instantaneous results as well as the retrieval of enormous amounts of information without the need for sensory processing. This energy-information then rewires our brain to obtain what animals must genetically dedicate specialized chunks of their brain to, and what normal people spend 30 years of their lives training for on a single task.

The abilities and methods that yogis have practiced for centuries, recently studied by science, like mystical fire yoga (gTummo), prove that strenuous training can give a measure of access to usually unconscious metabolic functions. Such feats have nothing to do with actual magic [14]: on one side, at most, we challenge biology to some degree… on the other, we contradict the laws of physics. Yet one can wonder: why should natural functions (such as Lung-gom-pa, the capacity to run for 48 hours without rest) require such intense mental conditioning?

Clearly we lack any hard-coded instinct for them. Maybe because we evolved to activate them through the extrasensory. Similarly, it entails that the comparatively puny intelligence most people demonstrate (even in the most evolved races) and the whole of developmental psychology are what remains when we function on a purely biological basis, without energy.

Reimagining love

For centuries, testimonies have spoken of powerfully transformative, ethereal sexual encounters: intercourse with unseen, or sometimes seen but intangible, entities. In the Middle Ages, that used to be said of witches (because said entities were supposed diabolical, so only witches would willingly summon them and learn skills and magic from them) [15].

I realized that the “person” in bed with me—in front of me, I should stress—was a composite of various girls I had once known (Martyn Pryer agrees on this point) including my ex-wife, but with other elements not drawn from my memories in any sense. It was its own creature but seemed, as it were, to be using part of my own experience in order to present itself to me. I can only say that the experience is totally satisfying. From some points of view the sex is actually more satisfying than that with a real woman, because in the paranormal encounter archetypal elements are both involved and invoked, a rare event in normal everyday relationships. For forty years Benedict of Berne had kept up amatory commerce with a succubus called Hermeline.

Louis Proud, Dark Intrusions: An Investigation into the Paranormal (2010)

Making love at a distance would make true loneliness impossible, wherever we may go, as long as we know energy. It only makes sense that sexuality, the most sensorially immersive area of experience for a human being, would prove a vehicle of choice for extremely potent spiritual epiphanies. Even recently, stories of ghost sex and succubus encounters continue to emerge. Love is unquestionably the language of God.

The greater satisfaction derived from such encounters than from mating with most real partners fully corroborates this. In the ancient past, minds more developed than our own could visit each other in dreams or visions, making a joke of distances, and caress one another with a thousand hands. This resonates with our deepest aspirations, which literature and, increasingly so, video games express beyond the self-censorship of reason. In our unconscious lies the desire and the memory of making love with all our being, crossing the clouds and heavens above, entwined in mind and body in a maelstrom of sensations impossible to describe. We are born for more than the flesh can provide.

Unlimited power at our fingertips

On the material side of magic, if we sum up all the metapsychic disciplines that have gained a level of credibility through the years, we were nothing short of demi-gods. A level of telekinesis in the range of single to dozens or hundreds of tons was real and widely available too, as evidenced by the capacity of antediluvian cultures to routinely move monoliths in excess of a thousand tons, which were strictly impossible to move until recently.

Because those builders appear to have been both few in number and already well degenerated, it follows that it must have been all the easier for our original ancestors. This, together with levitation to any degree, let alone outright flight, and the capacity to invoke even a fraction of the heat needed to vaporize a human body, inevitably removed the notion of physical or geographic barriers, accidents, or climatic difficulties. The entire world was ours to care for, play with and live in, in almost any environment we saw fit.

But the most powerful display of the power of the extrasensory is another phenomenon, one highly controversial yet attested so frequently as to be thoroughly undeniable: spontaneous human combustion (SHC).

Each year a number of people spontaneously combust without any apparent source of fuel, alone, and most of the time die from it, usually in their sleep, indoors, though there have been survivors, witnesses, and outdoor cases too. All turn to ash, including the bones, something thoroughly impossible, as it requires an amount of fuel and temperatures not even found in professional crematoriums [16], corresponding to 110 MJ, or roughly 26 kg of TNT, in as little time as half or a quarter of an hour. Moreover the fire always burns selectively, without touching anything but the person or objects in direct contact, sometimes not even those, seemingly cleaving body parts clean off or leaving the clothes intact, despite the abnormally intense heat reported. And with extremely rare exceptions [17], victims neither scream nor realize what is happening to them.
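As a back-of-the-envelope check of these figures (assuming the standard TNT equivalence of 4.184 MJ per kg; the variable names are ours):

```python
TNT_MJ_PER_KG = 4.184  # conventional TNT energy equivalence

energy_mj = 110.0  # claimed energy needed to reduce a body, bones included, to ash

tnt_equivalent_kg = energy_mj / TNT_MJ_PER_KG         # ≈ 26 kg of TNT
power_kw_quarter_hour = energy_mj * 1000 / (15 * 60)  # ≈ 122 kW if released in 15 min
power_kw_half_hour = energy_mj * 1000 / (30 * 60)     # ≈ 61 kW if released in 30 min

print(round(tnt_equivalent_kg, 1),
      round(power_kw_quarter_hour),
      round(power_kw_half_hour))  # 26.3 122 61
```

In other words, the claimed release corresponds to a sustained output of tens to over a hundred kilowatts, which is what makes the selectivity of the burning so striking.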

A magical world

We cannot begin to conceive what that lost world was like. What is left to do when repression, war, misunderstanding, hate or neurosis cease to exist? We are so used to defining ourselves in the negative, through striving against the imperfections of people and society, all brought about by human beings, and we think so much in terms of “good” as opposed to “evil”, that we have come to see utopia as bland, unappealing.

We are so culturally conditioned to think of heaven as a respite awarded to the good souls that raged against injustice, that we justify evil to ourselves as some kind of cosmically ordained challenge meant to give meaning to our lives. But what if all that discourse were only a long-standing cultural justification for a wholly unnatural state of existence? What if suffering and pain, or most of what we conceive of them anyway, were never part of God’s plan for our lives?

What is left to experience when all known factors disturbing perfect happiness evaporate? Can we be fully human without knowing the full range of human feelings: grief, the pain of anger and loss? Do we not learn to better ourselves by overcoming so-called negativity? All animals naturally still experience some of that, especially when they are not apex predators. Predation constantly pushes species to better themselves, to refine their bodies and minds.

Based on the split between great apes and other primates, we can estimate the relative end of predation at about 10 million years ago, with the first late Miocene apes whose brain size approached that of chimpanzees. If we accept the maximalist hypothesis that the vast majority of all causes of death in cetaceans, elephants and apes—which is to say all sapient species (cephalopods and birds excepted)—are non-predatory, then this life of peace we are contemplating has been the shared inheritance of many if not most intelligent species for tens of millions of years, maybe much longer in the ocean. Do bonobos, gorillas, orcas or sperm whales find their lives boring?

Late Pleistocene Neandertals were gods. Even the smallest precognitive sense, very easily attainable with correct food today, suffices to prevent any meaningless, destiny-impeding accident. So it boggles the mind to fathom what kind of event could possibly bring about a true feeling of loss and emotional pain in the life of such beings. Even natural death, as rare as it would become for beings almost ten times as long-lived as us, was likely not seen as an end or an impediment, when among the first things we learn with a modicum of experience is the immortality of the soul in energy, its transcendence over time and space. And how could one fear death when talking with the deceased becomes customary?

Add in stronger powers and the world becomes our schoolyard, for as close to an eternity as we can conceptualize. Love lasted for centuries if not a whole lifetime.

And because we do not own energy, it makes sense that it exists in nature too, or rather could. In the air, in the ground, in the trees and rivers, deep down in mines, in underground lakes or in households. Sightings of fairies, ghosts, UFOs (the 5% objectively impossible to explain away rationally) or werewolves, and Holy Mary apparitions, all share basic, transcultural characteristics across countries and eras; all hint at a forgotten realm of being above our own, now locked and faded. But it likely reigned supreme in the distant past, with magic everywhere, a world beyond our wildest imaginings.

If you can’t take a little bloody nose, maybe you ought to go back home and crawl under your bed. It’s not safe out here. It’s wondrous, with treasures to satiate desires both subtle and gross. But it’s not for the timid.

Q

Our crucial ecological function

After each deglaciation, plant biodiversity in Europe tended to suffer more than on other continents, due to the position of mountain ranges blocking the way for species that cannot redeploy from their climatic refuge zones once the ice retreats. If the techniques progress, or can progress, enough, we may well find evidence that biodiversity in Europe and Asia was more affected after the last deglaciation 10,000 years ago than it had usually been previously, because of the Younger Dryas and the disappearance of Neandertals.

Ice Age Temperatures chart

What if the ecological role we so readily accepted has always been to assist the rebooting of ecosystems after successive glaciations? If you were extremely intelligent, practically immortal and, more importantly, instantly cognizant of any history or information you might ever need, wouldn’t assisting ecological relationships and planting seeds everywhere in a coordinated manner over a few centuries be child’s play? Wouldn’t anyone feel automatically compelled to do so out of our innate love for nature? And propagating seeds is what every mammal does after all, just without intellect or planning; we humans, on the other hand, just by collecting seeds and moving, have always had the means to do the same on a far grander scale, across the whole planet if we wanted.

They might have been nurturing forests on a continental scale (fostering species that benefited the most from and sustaining richer ecosystems). We would most likely not notice it, time erasing irresistibly any past pattern of intervention. Nature on its own is haphazard and deals with averages. Like evolution itself, its intelligence is local, distributed, and does not mind mass deaths in the process, sometimes of entire species. But we can do better with our intelligence, we can act as the conscious control center and smooth out climatic transitions, avert global threats from space, whatever the planet would do if it could walk and talk.

With some imagination and time, nothing was impossible or requiring high technology, even taking animals for the ride and reshaping thousands of hectares of forests to our whim. Or desert, or toundra, or anything capable of supporting a number of individuals. The megafauna has a disproportionate effect on the environment, and on top we add intelligence in our management. Magic simply makes it fabulously easy.

By virtue of who we are, the ecological function of evolved humans is to increase biodiversity and evolution, to facilitate the growth of life, and that might have been particularly important in Europe through the Ice Ages. They could have used their powers to build artificial dams much like castors, move chunks of rich soil etc. Considering the capabilities of their diminutive descendants, the antediluvian civilizations, anything was possible for them.

Genetics

In this chapter, we endeavour to explain in layman terms as much as possible—or else introduce the necessary exposition—the elements of genetics on which we base our hope for eugenics, and the plan we laid out, using this knowledge. We mean to go beyond everything that has been tried before, in order to resuscitate the Neanderthals in us, to undo the degeneration that accumulated through the millenia.,

The laws of heredity

The current neo-darwinian synthesis is the scientific foundation behind both the materialism prevalent in our global civilization, and incidentally the global Jewish agenda to promote mass immigration and race mixing. Our only hope for the future to avoid further mixing with polluted bloodlines, and to regenerate our species the way it was before, is a species-wide eugenic program and a proper understanding of heredity. We can not wind back the pendulum. But we can spin it forward past midnight with our own hands.

Darwin who explicitly endorsed inheritance of acquired characteristics, would not see eye to eye with the current darwinist religion. His theory of gemmules, small particles that he assumed would carry relevant influences through the body and to the germ cells, was explicit endorsement of Lamarck’s inheritance of acquired characteristics, foreseeing exosomes and retroviruses before any notion of mutation or even nucleic acids existed.

The question revolved around not the fact of natural selection, the survival of the fittest, but the source of variations in nature, or in our language, mutations, and it was obvious to Darwin that it had to be internal. But contrary to modern pandits it was then that mutations had to be produced in the confrontation with the environment, in what we now call a feedback system. He also realized how mixing species led to dissolution of their respective qualities, while crossing an animal with its own preserves its special characteristics acquired with difficulty.

Then August Weismann and Francis Dalton at the turn of the 18th century spred the dogma-with no experimental justification whatsoever, that the only alteration life could impart to the physical vectors of heredity were pathological, degenerative. As a result, today the “modern neo darwinism synthesis” is built on the two same pillars of chance mutations (leading to the neutral or quasi-neutral theory that most or molecular evolution is non-adaptive or random) and natural selection.

Chromosomes
See the figcaption

Around 1900 the “laws of Mendel” were established and summarized our basic understanding of heredity until now. It includes law of dominance and uniformity, the law of segregation and the law of independent assortment. We learnt that the distribution of traits mostly takes place during gamete formation. Our whole DNA, the vehicle for most of our heredity, lies in 46 chromosomes (for humans) making for 23 pairs according to similar structures. During meiosis, homologous chromosomes physically bond, then each pair segregates and migrates to opposite ends of the nucleus, forming two gametes with half the number of chromosomes until fecundation restores the pairs. During bonding, exchanges (crossing-overs) occur over varying distances, maintaining—mostly—the same order of genes, but reshuffling (or not) matching sequences within a pair. As a result each daughter cell… each spermatozoid or ovule is a unique combination.

Of these three laws, none stood the test of time, both in normal and pathologic. Since we know the molecular mechanics of gene expression, single gene determinism appeared rather exceptional, most traits being multigenic and conversely, many genes pleiotropic. Early researchers suffered from selective perception and still do, rejecting data they could not make sense of yet in the name of dogmas and preferring nice consistent (but ultimately false) theories to unclear facts.

Mendel focusing on very simple traits in very simple genetic systems also contributed to an extremely reductive definition of genes as mere recipes for proteins with little to no influence from the environment. The dichotomy between dominant/recessive is very simplistic, leading to the definition of epistasy (the interconnectedness of genes) and QTL.

It is taught that four evolutionary forces are responsible for genetic changes: Random mutations, Natural selection (death of the less adapted or preferential breeding of the best), genetic drift, and migrations mixing things up, spreading and/or disrupting new genotypes.

But this is all wrong. Evolution is not driven by chance, and mutations, to a large degree at least for vertebrates, are random copy errors. As we will see later, natural selection, the projection of the liberalist fight of everyone against everyone projected onto biology, has been overhyped, undoubtedly for ideological reasons. Evolution on many levels is intelligent, the fruit of very complex feedback mechanisms in interaction with the environment, with our life and choices. It is not to say that competition and the survival of the fittest have no role, but it is much more limited than assumed before, and the more complex the lifeform (the longer-lived often) the more darwinism is reduced to sorting out newly evolved adaptations. Until, as with humans and many marine mammals, it has become completely obsolete, replaced by higher forms of selection such as sexual selection.

The quantal structure of DNA

Our DNA has been built over millions of years, forming adapted gene complexes whose unity is crucial to conserve adaptations through epistasy for an optimal result in a given environment. But how could these sets be conserved, if at each meiosis everything is mixed up ? It has been believed until now that statistics and linkage sufficed to account for this: proximity would ensure sufficient linkage and conservation of haplotypes, and philopatry, breeding close to your birth place within your original stock (a fortiori your own family) will limit admixture and the splitting of coadapted complexes, while increasing statistically to reappear at each fecundation the closer your mate is to you.

Natural selection would have over time put genes functionally interdependent close to each other, maximizing their linkage, preserving their unity over generation. Population genetics teaches however that free recombination should prevent such packaging of polymorphisms. Quantal genomics explains how those genes can stay bundled, by actively maintaining this clustering and those polymorphisms, on a much wider scale and frequency than entailed by distance only, by prohibiting recombination and mutations in a certain zone. Ancestral Haplotypes are specific sequences of hundreds to millions of base pairs in length conserved identical across hundreds to thousands of generations, counting sometimes in millions of years of age. Some haplotypes are 99.9% identical across all their instances with less than 0.0003% divergence, as opposed to 10% between different haplotypes. This discovery invalidates Mendel’s law of assortment: our cells are not blind as to the information they contain, DNA has structure and layers to it and we inherit those sequences en bloc like supergènes.

They represent alternative versions of Polymorphic Frozen Blocks (PFBs, continuous sets of genes in which mutations and recombinations (often less than 1% of recombinants within a population) are effectively suppressed. Unrelated families display the very same AH throughout the world, hinting at an origin millions of years remote. These blocks often regulate genes expressed by cis (on the same chromosome), trans (on its homologue) or epistatic interaction (with another chromosome), as one supergene. Recombination takes place only between PFBs.

How frozen blocks are maintained is due both to active mechanisms with hotspots for regulating proteins concentrating repair activity, and to the polymorphism self-maintaining itself: to warrant a crossing-over, a minimal level of homology (similarity) must be reached, lest the MMR system detects a mismatch and unravel the DNA heteroduplex. So the more divergent AHs are the more prohibited recombination between is, exponentially so beyond a value of 3.4-3.6 bp per 1000bp (1/400).

The precise definition of PFBs and ancestral haplotypes is somewhat complex in reality, the same authors happen to give two definitions, but there is a reason. The first definition sees PFBs as analogous to genes and AHs as alleles (different version of the same rigid structure), but the other, closer to the truth, sees AHs as rather containers, and allowing inside themselves mutations between contiguous PFBs and recombinations between homologous ones. In short, in the simpler definition the barriers to mutation and recombination are the same.

The reality of the MHC looks more like islands of stability[^ici] with different AHs encompassing smaller or larger (discontinuous) sets of them, with mutations free to occur between the sets. What happens when different haplotypes meet and recombine, if they do so, is unknown.

The reality is that the frozen blocks occupy only a limited proportion of the whole MHC region of a megabase or more and it is not possible to define hard boundaries between frozen blocks and areas subject to recombination. There must then be degrees of freezing as well as specific hotspots.

Major Histocompatibility Complex (MHC) in Health and Disease (2020)

Another cause of those size differences between AHs, beside the method of measurement, might be the degree of mixing in a population. For instance, the AH 8.1 is measured at 4.7 Mb, others up to 14 Mb in some breeds of cattle. Haplotypes were fixed in the different subpopulations of humans and stop changing for the most part, they must have evolved independently for quite some time before acquiring their fixity, however mixing leads to an erosion of those boundaries, undoing the work made by many generations in polishing the set of intercompatible alleles in functional groups. For simplicity’s sake, we will use the simpler definition.

Such identity usually implies identity by descent (IBD), but the conservation of such big contiguous sequences over hundreds or thousands of generations rules it out. We must invoke active, controllable features protecting from mutations as well prohibiting recombination in the PFBs while managing in between them. We have to seriously reevaluate for the possibility of any real divergent evolution on the basis of natural selection for the past 100 000 or so years. Why or how is there so much allelic diversity, clearly functional and not neutral, if the selective value is rarely more than weak to imperceptible ? How can selection “see” what has little effect ? It does not, for selection “sees” only haploblocks.

Indeed, explaining the conservation of such extended blocks in populations so remotely related (essentially all races included) would require the fallacious conclusion that there must have been successive expansions and contractions millions of year ago, with pre-existing retroviral immunity failing at each step, resulting in successive epidemics wiping out every time most of humanity down to a few thousands or less, and through trial and error an effective symbiosis would come to pass between host and viruses.

Beside assuming various haplotypes never in the same body in a population for eons would complement and not short-circuit each other, and the fact no such bottleneck events have no wider genomic evidences backing them, this fairy tale falls on its head because it endorses germ theory and assumes the relationship with viruses is a priori adversarial. But such mass death episodes are equally unknown in the wild despite the ever-evolving nature of pathogens. If they could happen, and happened enough to shape the genome of every single higher vertebrate studied so far, they would happen regularly and we would see them in the wild, but we don’t. The only important epidemic episodes observed are tightly linked to either human presence or very rare, local and temporary traumatic events of climatic change, unlikely to mark a whole species’ DNA definitively. But there is a proven way to allow a fast circulation of haplotypes exists within a population, much faster than normal heredity and that does not involve non-existent mass deaths: sexual horizontal transfer or the size effect.

Adaptations are intelligent and endogenous

The Central Dogma of molecular biology states that biological systems can not modify their own heredity purposely, meaning that information cannot flow back from proteins to RNA or DNA. The dogma dictates that inheritable change can not originate from the environment in an ordered manner (not a degenerative or pathological influence). Francis Crick stated this dogma in 1958, but this is just a revival of the Weismann barrier, and no less an article of faith. Epigenetics came as the first major adjustment, albeit one that does not involve the DNA sequence but instead reversible chemical modifications on the cytidine methylation, depending on the environment, diet, diseases, etc. Those fine transient chemical tunings can remain stable over dozens of generations in animals, and because methylated cytosines have a higher spontaneous mutation rate, modifications seldom cause permanent change over many generations. Mainstream science now acknowledges the fluidity of DNA and its considerable sensitivity to environmental variations, and the mediatic success of epigenetics was hailed as a “reasonable” resurgence of neo-lamarckism.

In reality this “soft” inheritance does not entail any transfer of information. If the environment can produce functional somatic mutation, the modern darwinian synthesis rules out any feedback from somatic mutations to the germline, so any influence of the environment would need to touch the germline itself; as is the theory itself, excluding a some-to-germline connection, likely is not sufficient to account for the findings used to illustrate it.

Integrated lamarckian inheritance schema

Work by the likes of Ted J. Steele went much, much further in uncovering the molecular basis of the mechanisms for the heredity of acquired traits. Direct evidences prove the existence of one (or several) dedicated pathway to convey and integrate in a controlled manner, genetic information from the body into gametes, suggesting moreover exosomes as the carriers of the flow of informatio.

Soma-to-Germline Transmission of RNA in Mice Xenografted with Human Tumour Cells: Possible Transport by Exosomes (2014)
Image Cosetti article dna transfer cells to sperm

But the most critical blow to the dogma came in the early 1990s with the discovery of SHM, which is the most important mechanism by which immune system cells (lymphocytes B) can produce an exponential variety of antibodies out of proportion with the smallness of the genetic repertoire inherited from the parents (the V repertoire). Lymphocytes B routinely undergo localized and controlled rounds of mutations, then induce the replication of lymphocytes responding to pieces of pathogen-derived antigen brought by antigen presenting cells, and kill cells reacting to Self- antigens lines to eschew autoimmunity. This was nothing short of an accelerated and domesticated evolution process in the controlled setting of secondary lymphoid organs.

But Steele revealed the ubiquitous trace in the IGHV that encode the base for antibodies, and the MHC that encode for all the receptors for abnormal molecules, pathogens and intracellular parasites, of retrotranscription. This means sometimes the new combinations produced are sent down to sexual organs to integrate the germline to update our stock of MHC receptor or immunoglobulin heavy chain pseudogenes with recently relevant solutions. The scientific community retorted that the yearly infections of polio, flu etc disproved inheritance of acquired character. But with the new theory of viruses the hurdle clears out: The body does not adapt to fight viruses, but to tolerate them and work with them efficiently in the task of cleaning of denatured molecules, which only create symptoms when there is too much of them in our diet.

If bacteria can fine-tune mutation rates to survive antibiotics or food scarcity, so should animals, being incomparably more complex, possess systems to assess their levels of genetic homozygosity and in conjunction to stress increases the rate of mutations and gene conversion (one-way crossing-over) to generate diversity where there should be ? This is backed up by experimentally, and supposed to be the main factor behind the observed MHC polymorphism by generating new haplotypes in relation to the degree of homology. But because the heterozygosity does not by and large correlate with fitness in vertebrates, environmental cues have to play a role too.

The notion of genetic anticipation allows us to solve many mysteries, namely the persistence of high polymorphisms in systems that yet do not react in any clear way to selection. Organisms seem capable in a few generations to freely adjust the levels of diversity, increasing it or maintaining them to very high levels despite very high consanguinity in what are called self-identified genes supposed important to avoid autoimmunity. On the other hand the same genes in other species do not appear subject to selection, but rather evolve like a neutral gene, even though the corresponding receptors are considered crucial to fight effectively against intracellular parasites according to pathogen-driven evolution theory. Despite the usually high polymorphism in immune genes, most higher vertebrates show a remarkable lack of genetic response to new pathogens with no discernable loss of fitness meant to accompany a depauperate genetics. Nor do those populations hurry to generate the missing diversity even after thousands of years. Even supposedly neutral hypervariable loci, for which the probability of genetic identity is commonly <1 in several millions, can stay identical over vast populations, contradicting our models. Low polymorphism in MHC-II and MHC-I should definitely be associated with lower resistance to infections, and problems in distinguishing kin from non-kin, but it does not. Monomorphism in mammals most of the time fare just fine without ravaging epidemics.

Individuals immediately lacking the optimal alleles in the first generation exposed to a novel pathogen would be at a slight disadvantage, but their body through SHM would come up with the appropriate combinations, then feed them back up to germline cells. This explains the extreme variation observed (sometimes) in the MHC despite the difficulty or impossibility to observe levels of natural selection that explain it. This makes relationship health and intra-individual allelic diversity essentially non-linear and unpredictable in theory.

We see the same mark of deaminase activity resulting from edited reverse transcription on most SNPs (single mutations) which makes for the bulk of genetic variation world-wide. This means natural mutations are not random but guided. In this conception, selection still matters, but in a more limited way, by putting novelties of endogenous origin to test. We see those kinds of adaptations in systems where the relationships between parts and inputs from the environment could be mechanically translated in amounts of enzymes and substrates, in the manner of an analog computer. But there are also experiments indicating the possibility for much more complex information to be stably inherited in as little as two generations (always in relation to the very close consanguinity). This also gives a beginning of understanding as to the mechanisms of instincts.

Short-circuiting natural selection

We know since 1950 that independent assortment - recombination being a pure product of unbiased statistics - is not absolute with the discover of TRD cryptic female choice or meiotic drives. Models show that very weak biases (around 1% of transmission of one chromosome against the other) can have a tremendous impact on the genetic structure of a species. Opinions are unclear as of yet because TRD is an umbrella term including all phenomena altering the expected mandelian proportion at birth. Some simple systems have been observed to behave like selfish genetic elements, like transposons, helping themselves with no apparent benefit for their host, “cheating” natural selection, tampering with recombination, which leads to an accumulation of genetic waste.

Transmission ratio distortion: review of concept and implications for genetic association studies (2013)
Diagram of mechanisms behind transmission ratio distorsion

However the same was thought before of transposons, until we realize they like viruses have been “domesticated” since the dawn of time and become an inseparable tool of the genetic machinery, to the point of making up 50% of our DNA. Another school of thought believes that more complex drive systems will eventually be discovered and it is likely to be much more common with a much more extensive influence than previously thought.

Illustration of meiotic drive
The Ecology and Evolutionary Dynamics of Meiotic Drive
See the figcaption

TRD includes haploid selection (mostly sperm competition) and embryo selection and expresses the sorting of the favourable and unfavourable sperm by the female body based on its own genetics before meeting the ovule (possibly at that moment too). It is indeed the way nature has found has found many ways to “cheat”, which is to say expedite, the tedious and wasteful process of natural selection by displacing the sorting onto the haploid phase (the sperm and ovules), and then testing the embryo before birth, cutting costs as soon as possible. Sperm selection both within one’s own ejaculate) as well as between the sperm of different males with skewed ratio from 30 to 97% in favour of one semen over another. In the case of inter-ejaculate competition, the presence of such biases between individually equally potent and fertile sperms indicates that this superiority is relative to a particular female body, which implies a capacity of its capacity to select compatible sperm with itself, based on the sperm’s haploid genetics. Intra-sperm competition relates more to the spermatozoids’ intrinsic quality. More than 99% of all gametes die before even fecundation, or at fecundation, or die during development, and the journey to the ovule ensures a sieving of the average 300 million sperm to only 5,000 sperm making it into the utero-tubal junction, 1,000 to the Fallopian tube, and 200 actually reaching the egg. And then, because ovas don’t live long after maturation, it is common for the sperm to miss ovulation and wait a few days in the tubes.

And the quality of spermatozoids matches that of the ensuing child. This is the secret of haploid selection, the ultimate cheat code of nature to sort over millions of defective combinations of recombinant chromosomes without investing more than the resource of a single cell, and retaining only the most structurally sound sperm in the process as well as ensure a degree of independence from environmental pressures: so that fitness may not need so many individuals to die by selection or conversely an easier environment for generations may not lead to an immediate decline. Due to the expression of a number of recessive alleles in the hemizygous state, selection has a unique chance to test the functionality of those genes while it would require normally homozygous embryos.

How many genes are subject that kind of selection is unknown, but fact is the resulting offspring obtains a longer life and better health in every trait observed, the effect on embryo viability carrying over into the second generation in both sexes. It is believed, and sometimes known the female environment can selectively bind sperms it detects as genetically compatible, protecting it, storing it, and discarding the rest.

On the other hand open water is very aggressive and blind so most fish must invest a lot on motility too, probably lowering the correlation, which could be much higher in internal fertilizers like us.

We know very little about mother-fœtus communication, beside the role of exosome exchanges and its complexity, especially in the early embryonic stages most concerned with spontaneous abortions. Miscarriages or spontaneous abortions are a natural cleansing function more elaborated than usually assumed. The mother’s body involves itself to weed out the unfit, replacing advantageous selection by predators while sparring the mother with the need for the costly expenditure of resources that is pregnancy. While this does not discard extensive screening of fetuses for all known markers of disease, instinctotherapy is likely to improve on that we mentioned, by ridding the body of the chemical noise likely to hamper the doubtlessly very complex mechanisms that represent its molecular intelligence.

Another path to altering the short-circuiting natural selection in the propagation of polymorphisms, is the size effect. Bucks and a number of other species have been demonstrated passing on immunity to myxomatosis to sexual partners without or without fecundation through intercourse with immunized males, then for females to give that immunity to their offspring. This and other observations (some times dutifully reproduced) mandates the necessary rewriting of all our conceptions about evolution. A high frequency of horizontal transfer would turn upside-down our understanding of evolution and genealogy.

Sperms have been found to penetrate into the uterus wall and glands, the cervix, the fallopian tubes. It has been found that the cells of younger females are more permeable to foreign sperm DNA than the cells of older females. There is also evidence to suggest that the lymphatic system is used to carry sperm throughout the mother’s body, causing male microchimerism in women. That same, newly-updated information could then be passed on to sexual partners18. The body should also propagate the change it has acted not just in one gamete, which would have basically 0 chance to pass on that information to the next generation, but to all or a vast portion of the ovules/spermatozoids or better yet germline stem cells, at the right place on the right chromosome, and if we want a definitive change, on both homologues. A certain footprint associated with crossovers might indicate just that.

Genetic fixity versus fluidity

Superficially there seems to be an opposition between the fixist view of DNA we would draw from the quantal structure, and on the other hand the open-ended, constantly updating characteristics which identified Lamarckian processes imply. The existence of dedicated endogenous pathways to derive mutations from the environment can be expected to occur and target the germline throughout our life if not constantly. Yet polymorphism appears mostly conserved, while individuality is inherited.

A reduced lamarckism view would also overlook the importance of races and inheritance as a whole, giving the impression that populations adapt over time no matter what. But this is demonstrably false with human populations such as Inuits, who despite living in the North for about 6000 years, haven’t got a whiter skin—even though this is hardly a major morphological hurdle, as Whites can tan easily— and adapted enough to avoid vitamin D deficiency despite whale meat, raw in particular as they do, having lots of it, lots of everything in fact. So while germline retrotranscription and wide genomic rearrangements are certified facts, they do not infirm the macro-observation of the stability of racial traits in human populations in spite of thousands of years of exceedingly harsh environmental selection.

This is explained by different parts of our genome just having different roles, thus evolving at different rates. Moreover, the traits most important for survival (physiology, higher brain functions, sensory organs) are usually so polygenic, resulting from such complex epistatic networks, and so robust in their expression, that in effect they behave phenotypically like rigid, monolithic blocks invisible to selection, stifling any rapid adaptation. Selection would then only function between populations that diverged for long enough. In short, as we said for immune genes, some sequences are very sensible and conserved, others extremely dynamic and represent the fine-tuning of any system and integrate the first set only sparingly after a long time. Hencewhy only 10% of our DNA appears to represent ancestral haplotypes, excluding many very functional and important genes. And likely the dichotomy merely simplifies a complex reality.

Besides, without a programmed source of mutation, quantal evolution becomes theoretically unmanageable and leads to absurd conclusions. If all innovations are accidental and very, very rare and no new adaptation occured beside reshuffling haplotypes, then to obtain the haploblock structures humans possess while sticking mostly to darwinism, we need to invoke regular episodes of retroviral infections nearly wiping out our species dozens of time.

They would have rearranged our genome as they saw fit, with just a few thousands if not hundreds humans world-wide (including all races !) having to repopulate the planet again and again, while the intelligent protective features of PFBs would appear mostly as a byproduct of this, in the spirit of genetic hitchhiking. Without means to adapt the haplotypes themselves in a controlled manner, this is the logical conclusion.

This is nonsense and no such bottleneck is backed by studies. Retroviruses do not function like this (not in vertebrates, that is), and do not cause wholesale genocides in nature. This would also fail to explain the packaging of genes totally unrelated to immunity.

In defense of inbreeding

Photograph of Helen King working on rats

Extensive experiments were conducted in this spirit long before our modern methods and concepts of genetics and molecular biology were developed. Helen Dean King (1869-1955), showed that the failures and degeneration first encountered in her inbred rat farm were not only due solely to their poor nutrition, but also prove reversible as nutrition changed. Crucially, those diverse malformations and health issues characterizing by the 5 first generations of rats were also present in many from the stock control group which was neither inbred, nor selected for strength and fertility: inbreeding just made them worse. She remarked deftly that “if the experiments had been discontinued at this point [before the change in diet] the results would have been a confirmation of the conclusion reached by Darwin and by several others”. This differential effect of diet on inbred strains is a well-known fact but rarely taken to its logical conclusion. It has not occured that all Standard diets for lab animals might be amplifying the detriments of consanguinity and diseases alike in zoos or laboratories, because no one can see that for themselves19.

When mutations are inherited from one parent only, they usually do not express so the issue is naturally when family members breed with one another, giving their child a high probability of inheriting the defective gene on both genes of a pair of chromosomes (homozygosity). Due to epistatic interactions consanguinity is likely to amplify metabolic complications in a multiplicative rather than additive way, increasing further the strength of selection. Homozygote backgrounds are always less tolerant to abnormal disturbances such as remaining genetic flaws, or unnatural diets. We do not tolerate what could doom the species: a species is safer when individuals die rather than mutate excessively.

But not even Helen King thought that far, only went as far as removing the milk and biscuits that plagued her first generations so much they could breed enough to apply any measure of selection. She could not think of avoiding cooked meat in her improved rat regimen. Yet despite all those adverse conditions, laboratory mice tolerate inbreeding to the point of virtual cloning for generations without side effects. Another inherent strength of inbreeding is the conservation of non-additive genetics.

Due to epistasy and trans interactions, all complex traits are highly multigenic and might not be possible to isolate from other traits in a mendelian fashion. The best way to preserve those traits, depending on precise combinations, is group or family selection: one gets a much better idea of a person’s hereditary potential by looking at its family and breeding the family as a whole.

The practical definition of genetic load is the reduction in the average fitness of a population due to the presence of deleterious genetic mutations compared to an ideal, mutation-free population. It represents the burden imposed by harmful alleles that lower survival and reproductive success. Genetic load can arise from various sources, including mutation load (new mutations), segregational load (harmful recessive alleles maintained in a population), and recombination load (disruptive effects of recombination). In practical applications, genetic load is relevant in conservation biology (where inbreeding can increase genetic load), medicine (as in genetic disorders), and evolutionary biology (affecting natural selection and adaptation).

Beside de novo chromosomal anomalies such as trisomies which always result either in stillbirths, early death or sterility (hence can’t propagate), the grossest genetic flaws consist in recessive alleles who presence in the heterozygous state (one allele flawed over two) does not cause death or a major trouble. It has been established that more than 70 % of the genetic load is made up of these alleles. When an allele is fixated – meaning all chromosomes in an entire population include it – there is no possible selection or improvement – save for new mutations to arise – as there is no possible variation between individuals, over which to select. To select the healthy from the flawed, there needs to be a difference between the two ! Hence, to remove defects, one needs to either isolate them by amplifying their detriment in the homozygous state (then weeding out those bloodlines), or generate variation by concentrating their overall proportion (in the whole genome) in a select number of lines.

Both endeavours are achieved through inbreeding. Among the same average genepool mating close kin has the automatic effect of heightening variance, depending on the existing allelic diversity. According to fossil Neanderthal DNA our ancestors were extremely consanguineous, with long streaks of homozygosity and individual uniformity on a chromosomal level, and confirmed cases of proper incest. With consanguinity degenerate bloodlines die out (mostly in embryo, or through predation, natural or artificial selection) whereas others survive and become healthier, purged from many detrimental mutations at the cost of some diversity and the fixation of sublethal alleles.

On the other hand, hybrid vigor, also known as heterosis, is the phenomenon opposite to inbreeding where hybrid offspring exhibit greater biological fitness, growth, fertility, or survival compared to their parents. It can occur when genetically diverse individuals are crossed. Heterosis is the major argument in favor of maximum diversity and minimum inbreeding. But there is reason hybrids never breed true: in a few generations, as the distinct co-adapted parental gene complexes recombine and break down, causing an outbreeding depression in the F2, and the return of homozygotes. Heterosis reflects chiefly associative overdominance, which is when heterozygosity on one gene statistically causes the masking of nearby recessive alleles, close enough to segregate with the genes of interest.

In the animal world, it is the universal occurrence of incest of all types (depending on the ecology of the animal) that we observe, not its taboo, a fact obvious in the early XXth century but now forgotten. Honest research is still done though, braving the terrible omerta on the subject. From 1868 to 1963, it was the unanimous scientific opinion, shared by all sensible minds including Levi-Strauss and Leslie White20, that the incest taboo had no biological foundation and inbreeding was harmless to excellent.

Darwin and Westermarck were ridiculed in their time for their fear of consanguinity, as excellent results of husbandry and cross-breeding were considered self-evident and definitive. Then strangely the tide turned with definitive statements such as “the ratio of deleterious and lethal recessive genes to selectively advantageous genes is very high indeed” and “the biological advantages of the familial incest taboo cannot be ignored” generalizing, without scientific basis, and ignoring the wild disparity in genetic quality between both ethnicities and groups within them.

Meme about centuries of incest among Pharaohs

Hence once those purified bloodlines have been obtained, it is advantageous to mix them with each others, to increase variance and diversity again, and remove those other bad alleles as well, as each consanguineous line is susceptible to fixate by chance (or genetic drift) a different set of unwanted traits, not present in others. Three factors determines the efficiency of this protocol:

This is how inbreeding enhancement works, either slowly in nature or quickly and methodically as breeders have been doing for centuries. Lasting inbreeding not only occurs frequently in natural populations but is characteristically adaptive in many social, territorial, long-lived, low-birth-rate populations, which are all both philopatry (living and reproducing in the same place) and endogamy (reproducing with those close to you genetically): low dispersion is the norm, in plants and animals, as opposed to the alleged biological imperative to “spread and conquer new territories”. In this context inbreeding ensures more inclusive fitness (individuals passing more of their own alleles to their offspring by mating with close relatives), drastically reducing both mutational load (through purging) and segregational load (passing on less inadapted/unfaithful genomes) in a few generations.

Complex adaptations originating from a large number of genes, are established by selection on a very large number of more or less genetically isolated extended families or demes. The ecology of apes is very conducive to such a population structure: while living in very inbred small groups, species-wide chimpanzees achieve three times the genetic diversity in all human races.

Long life (in excess of ten years) and philopatry, hence inbreeding, both favor and necessitate the development of systems of epigenetic regulation. If the generational turnover isn’t high enough, the individuals themselves have to evolve an adaptability and robustness in their physiology and behavior to maintain the same in order to thrive in a variety of environment:

Adaptation to ecological conditions should not be limited to the allelic substitution in response to each fluctuation in the environment. A current alternative seems to be the fixing of complex epigenetic systems which respond adaptively to environmental flows in a phenotypic rather than genetic way.

Shields Philopatry, inbreeding, and the evolution of sex (1986)

It comes as a logical consequence of the MGD theory: Populations with greater genetic diversity can more easily rescue or tolerate harmful mutations, which would make it hard for natural selection to maintain the quality of a trait and to eliminate harmful variants. So, suppressing genetic diversity is necessary for maintaining traits at a high quality level and for removing harmful variants. The more complex an organism, the less tolerance for variation. Or rather—if we join Lamarck—the randomness of it: variations may still occur but with a higher degree of involvement of the system itself. Higher intelligence among species, in particular correlates with more elaborate regulatory networks and complexity-related sequences like miRNAs. Unsurprisingly as the GSA phenomenon revealed humans developped very keen preferences for incest, while chimpanzees do not care. This can be explained by our respective ecology: apes do not need advanced kin preferences, because of their philosophy or philogamy keep them close anyway. On the other hand, humans’ vagrancy (our ability to cross a continent on foot during one’s lifetime) and thus potential for dysgenic outbred matings is unmatched except by birds. So groups that did not develop inborn inbred mating preferences diluted and disrupted their adaptations and lost the evolutionary race against other more fine-tuned ancestors, which would stick together or reunite to breed, as do many birds.

The theoretical cost of inbreeding

Statistics or case studies on human populations always deal with countries only allowing limited forms of inbreeding, such as South India Pakistan and Japan, countries moreover of doubtful if not degenerate racial backgrounds refusing any form of eugenics. Incomplete selection leads to a constant stirring up the blood and blurring the line between good stock and bad stock. Recessive alleles must be allowed to express themselves or no selection can occur, especially not natural selection in the form of embryo mortality, today almost the only kind still playing a role in human populations.

Slightly inbred communities averages at 2.3 lethal equivalents, doing no better than wider outbred city population, which are more panmictic than any natural population, and as outbred as can be short of full hybridization. It appears the purging effect is so weak (due to an inbreeding coefficient of 6.25% as opposed to 25% for incest proper) a reproductive compensation of 2 suffices to cancel it. RC just means the number of babies a mother will produce to absorb early deaths due to a lethal equivalent. More and more specialists acknowledge now, that it is only incomplete consanguinity that is dangerous. The harder the selection, the less people are likely to just over breed to compensate for the losses.

Simply put, cousin marriage far from an ideal middle-ground is the worst, most inefficient form of inbreeding because it does not kill enough babies, leading to the preservation of recessives in the population and more babies dying and frequent diseases in “healthy” people in the middle and long term21. But the real problem is the attitude of population: 3/4 losses mean nothing in Muslim countries where women amount to little more than breeding machines and will happily pump out not 5 but 12 babies, not including stillborns…

We should place more value on Life itself instead of saving defective children and ensuring they marry and breed like everyone else. By preserving them, we undermine the gene pool for generations. We should not fear stillborns and abnormalities. But we can smooth out the drama involved with sterilization and early, prenatal screening, continuing what nature already does in-utero.

On the other hand figures show strong purpging schemes like real incest are not very affected by up to 2 additional babies, the most a sane white woman would realistically produce. We must deploy the full force of threshold selection on viability (killing or discarding any animal below given measurements of strength). Embryos and infants too unfit simply do not survive, and their contribution to the gene pool becomes 0. The ecological cost (lost child per female) incurred in a sufficiently inbred population from a varied enough (no too varied though) original gene pool, might not be significant.

The health effects of inbreeding depend on three things: the number and severity of recessive, deleterious mutations carried by individuals, the number of genes involved in co-adapted complexes, and those involved in heterotic selection. The theory predicts that the natural logarithm of viability up to a given age should be inversely proportional to the inbreeding coefficient, to the rate of de novo mutation of lethal alleles on one hand. So it all comes down to the rate of new (chaotic) mutations on one hand and the practical importance of the heterozygote advantage beyond nifty theoretical models on the other.

Let us start with the mainstream conclusion, which is ours too: Detrimental recessive alleles are indeed the main component of inbreeding depression, and if or when it exists, heterozygote advantage is very secondary.

Flies experiments showed even in absence of selection (so relying only on small population numbers to increase in inbreeding, as opposed to sib mating), inbreeding depression correlates much more with the speed of inbreeding than its level, up to a certain point22. If inbreeding depression was due mainly to the loss of overdominant combinations only the level of inbreeding, not the pace would matter. Instead a smaller effective population size risks greater reduction in mean fitness than slow inbreeding over similar inbreeding coefficients.

Note that the number of coadapted genes responding to inbreeding either positively or negatively appears very limited. Populations exist with large genetic distances whose crossing displays no outbreeding depression, while other seem genetically virtually identical yet whose crossing causes considerable outbreeding depression. Moreover, if the number of sensitive genes was high, achieving an adapted combination for co-adaptation in just a few generations would be impossible, however this is exactly the result showed with many model species, from the Wistar rats to Templeton’s Speke’s gazelles in just 3 generations.

The background segregational load, what is left after you purged all ancestral detrimental recessives, will reflect the genome’s spontaneous rate of detrimental mutations, calculated in lethal equivalents. The question is how many mutations are chaotic or programmed. Well, considering overall mutation rates are adaptiveneither constant through time or the genome and longer-lived species have better inherently better correction mechanisms. So considering the absence of mutational meltdown in clonal mice lines, the rate of errors inherited for higher vertebrates must be exceedingly low. So, once the purge has succeeded in purifying a genome there might be no new errors in dozens of generations.

Early viability selection alone allows (wild) mice to maintain substantial heterozygosity even after 20 generations of inbreeding, and little to no decline in fertility. Once we successfully edit out the detrimental alleles, there might be nothing to fear from even extreme inbreeding: 17 generations would mean in human time 340 years, or more if we breed later. Furthermore, King’s Wistar lab rat lines went on 50 more generations of such inbreeding without any detriment, before the breeding was discontinued.

A study on triple-knockout mice might suggest a unique capacity for inbred lines in certain conditions to not only epigenetically compensate for recessives traits, but induce a true reparation of the knocked out sequence by copying over from the template of healthy homologous loci, part of them (exon shuffling) from either another gene of the same gene family or pseudogenes, leading to a complete reversion of the symptoms in just 2 to 4 generations of inbreeding, justifying the researcher to label the phenomenon “inbreeding de-repression”. This could explain the bewildering speed of recovery of Templeton’s gazelles following his breeding method in such opposition to classical models that criticism invoke hidden improvements in husbandry, though without evidence, basically calling the researcher a liar, despite of similar speeds occuring regularly with inbred lines.

Classical epigenetics would not explain a complete reversion to health, and theoretically evolution should also not favor what amounts to hiding defects,. We must credit genetic redundancy with the increase in complexity of physiological adaptations to the environment, however its role is not to “absorb” bad mutations. Stringency regards to (some categories of) mutations is more beneficial to the species because of the risk of mutation runaway.

In résumé, the health effects of inbreeding depend on three things: the number and severity of recessive, deleterious mutations carried by individuals, the number of genes involved in co-adapted complexes, and those involved in heterotic selection.

Heterotic selection and HFC

The inbreeding coefficient F is not a good indicator of heterozygosity, even in known inbred populations and as we said earlier, populations to maintain very high heterozygosity after generations of inbreeding. That is to say, inbreeding depression can never be inferred from one or any number of genes: it has to be measured experimentally to develop a useful model for a particular species.

HFC studies focus on the role of “good” genetics in the resistance against infections, and the definition of that “good”. Naturally the whole field is mined because it is based on the postulate that pathogens are enemies inflicting losses and that they constitute the number 1 driver of selection on the molecular level on genes related to immunity, for all vertebrates. The problem is we see all sorts of cases that undermine that assumption, but it can not be questioned, no more than a theology professor could question the existence of God. Most HFCs determined are spurious and difficult to replicate.

They often come from large, outbred populations precluding any selection from taking place… or fail to take into account the pollution of the environment, both gross (fish in our rivers or close to the coast) and subtler. The ocean being polluted as they are we regard with heavy suspicion any conclusion based on fish, whether marine or in our rivers. In our opinion as far as resistance to diseases is concerned a strong HFC reported, if valid, testifies of a polluted environment. So yes, some studies do note a higher fitness for heterozygotes in presence of parasites, but their situation might very well mirror that of Africans dying of malaria and protected by sickle cell anemia.

They are sick, weak, and pollution (or impoverished diet) is inducing an artificial selection based on accidental, weak genetic-environmental interactions, leading to false conclusions that say more ourselves than evolution. In the end, what we want to see reveals a lot about our eagerness to see all interactions as aggression, despite the prevalence of asymptomatic infections. The mosquitoes that are carriers of Plasomadium and feed on humans 40% to 100% of times in Nigeria, even in places of extreme epidemic status (36.8% of people infected) have themselves a rate of infection with plasmodium of 0.5%. Which is to say, plasmodium barely registers in their radar, so how if we are 73 times more sensitive, maybe we have a problem.

Population heterozygote advantage (or genomic overdominance) is a different matter. The illusion of a superiority of heterozygotes can actually arise under a wide range of parameters in highly polymorphic contexts with parasite or multi-strains infections, even in presence of underdominance (lower fitness for heterozygotes), when heterozygotes as a collective group are more likely to bear dominant resistance alleles, giving an illusion of superiority over rarer homozygote groups.

Another cause of misdirection regards to inbreeding depression is the ignorance of the recent history of a population, making the population in question much more outbred than supposed, thus nullifying convenient conclusions of a high inbreeding depression and that the genetic purge hasn’t functioned, when what really happened is regular migrations from other populations constantly set back the purge. On the other hand studies show that most homozygote populations are healthy and do not suffer important mortality or catastrophic collapses following epidemics. Even the mainstream agrees on the lack of satisfaction (and falsifiable !) model to account for all this.

Cryptic female choice and mate selection have been noted to reduce the cost of inbreeding by selecting “immune compatible” sperm or mate in birds and fish, especially in relation to MHC genes. The rule seems to favor non-identical but somewhat related haplotypes but no criteria seems valid across the board, the preferences of some species do not copy to others.

Inbreeding may be imposed on species with low dispersion or limited to very small niches (and effective populations) but the same can not be said for birds flying halfway across the globe with the seasons yet return every year to meet and mate within their birth place, within the same birth group. So we should see (cryptic) female choice as natural checks and balances to either/or further a breeding strategy or compensate for eventual conseqeunces of ecological constraints, for exemple maintaining heterozygosity in some immune genes. Despite this, low polymorphism or monomorphism has usually in the wild very little effect on the health of wild population health.

Heterotic genes ought be rare: a favorable epistatic interaction between two homologous alleles is no different from an interaction between unrelated genes. It will still occur if said alleles happens to move in its own new gene: the heterozygote situation is excessively unstable and leads to the expansion of gene families through gene duplication or non-allelic recombination, because only a gene family will that combination to be stably inherited, instead of breaking at every meiosis. Heterozygote combinations compare to a test bench for evolution which regularly bundles together the winners.

Actual statistics

Cousin marriages have been scrutinized enough world-wide, concluding in a negligible excess mortality not warranting public concern. However for obvious reasons good statistics about true incest is rare. Only two meta-studies exist, which we interpret:. The first one includes a total of 213 children, stating a risk for adverse medical outcome in the offspring of incestuous unions in the wide range of 7 to 31% above population background, with a higher risk in the first year of life. The reason of this strangely variable rate is that half (138) of the sample came from Czechoslovakian in 1971 most either convicts or ex-convicts, with 8 clinical idiots, 13 chronic alcoholics, 4 who committed suicide after the disclosure of their incestuous relations, and 2 with syphilis. Additionally only six parents attended secondary school while the others stopped at elementary school. They might all as well be called “criminally insane”, far from a good representation of the average population. Considering on average 40% of intelligence is inheritable the result of mating of congenital idiots from the same family is unsurprisingly predictable. We choose to dismiss this study in favor of the second.

The second study However, over 226 inbred children and 115 couples, is the result of amateur efforts to see the truth in this matter with actually normal, functional parents, not in prison or sick or mentally ill, but on the contrary rather above average in education and conscientiousness. If we exclude common illnesses and learning disabilities (eg. ADHD) frequent in children of regular couples too, we get an excess risk of just 6.2% for the offspring of first generation incestuous children. 42 children were produced as second generation incestuous offspring, and 12 had common illnesses or learning disabilities, giving a somewhat expected figure of 28.6% risk compared to the norm. To be fair we should substract here too the same proportion of common illnesses and learning disabilities of 6.1% to account for the rate of occurrence in the general population, giving us 22.5% risk for second degree incest children.

And that is assuming this higher figure comes entirely from genetic causes and isn’t an artifice from the small sample. More importantly, it also assumes a higher susceptibility to “common” diseases such as allergies and autism are “bad”, but they likely are not. On the contrary, having regular (though not deadly) viral infections and allergies) and “fighting them off” usually indicates a healthy immune system and cleaner body. So if anything it could indicate a enhanced genetics instead of inbreeding depression, and those diseases might just disappear with an instinctive diet.

Eugenic plan

Eugenics of the past, German or otherwise, systematically failed to accomplish anything, because they lacked either or all:

While classical breeding methods have been efficient enough with animal husbandry for centuries, the time they take is not one human beings have the luxury of spending. People live longer lives than rats and cows, fortunately or unfortunately depending on who we consider. What works with animals, while already taking a lot of time, would take centuries in a small group of humans by ordinary means, and thousands if not more for millions of people, for an inferior result.

There are deeper reasons too. In all those thousands of years, there was never any hope for selective breeding or a caste system in the long run, as long the onslaught of denatured molecules and mutations would not cease, and the psychosexual instincts had not been restored. No matter how long it takes, 10 000, 3 000 or 100 years, spiritual or racial superiority would decay. Now, the rate of mutation and pollution of the land, ocean and food is accelerating exponentially with each century. Already today, diseases of tainted heredity are becoming more common, and this is not due to mixing. In a few centuries, it might be too late entirely, for either us or the planet. That is why simple eugenics has no future, and ultimately never had.

The instincts, body and food must heal all at once. While Nordic populations conserve most of Neanderthals’ big brains with their mean 1484 cc, we will never see again the full integrity of the ancestral Neandertal phenotype with their strength, bigger brains and extended lifespan returning without these efforts23.

So the first step is to combat admixture. Race mixing in all its forms is a genetic catastrophe:

So there is no other way, we must sort existing populations according to levels of purity: the race of tomorrow must descend only from the purest stock. Less pure Europeans and other foreign racial elements will enjoy all the rights and privileges of life in the new Europe, if they embrace sterilization25.

The most important criteria for selection must be the persistence of the striking European recessive traits of eye color, skin color and skull shape. But universally adaptive traits like reaction time or nerve transmission speed will also provide good proxies for intelligence, since school results are an indicator only as many tests and educational opportunities are standardized, which imply control for parental influences (and reform the entirety of the school system !) so that everyone gets the same training. They cannot be genuine indices of hereditary value before the educative conditions have been optimized and normalized in the whole (breeding) population otherwise one cannot tell apart nurture from nature. We should rather discaring IQ tests completely, for being mechanical and hardly involving any efforts unlike actual school work. This is shown by psychological studies.

No effort will be spared to develop the sciences of psychometry and neurology to an extreme degree: the scientific, objective measurement of physical indices of intelligence, brain efficiency or development. They are necessary by any multi-centuries breeding effort, in order to assess progress with objectivity and security.

Then, after (or while) the issue of admixture is dealt with, we will determine through experiments, analysis and visions what genetic sequences result from cooking’s mutagenic effect, or as consequence of the relaxation of selection (wheither natural, sexual or social). Then remove them with genetic engineering we will remove those sequences in the whole population.

I must straighten out that not every objectively undesirable trait can be singled out this way, even in the best case scenario. Not all undesirable mutations are lethal or sublethal: most of what separates us from Neanderthal belongs to the category of misadaptations, divergent mutations fixating and changing our whole ontogenesis (development along species-specific morphological lines), without reduced lethality, but causing weaker more fragile bodies and brains.

Once we reach total purification down to the ground level of chaotic errors (infinitely small in raw primates, compared to insects or bacteria), how to go further toward full recovery of the Neanderthal phenotype ? The intellect can not exceed itself by its own means. That said it is not about substituting ourselves to nature, but undoing entropy, to find our original state. And it is not our brain we shall use, but the divine intellect, the extrasensory. With its help, which is to say, that of God, and by subjecting our bodies and minds to the harshest difficulties, we may recreate ourselves.


  1. We do not support their alternative Out-Of-Africa theory, for starters due to its use mitochondrial DNA which we criticized, the fact it does not depend on their MGD theory and does not correlate at all with fossil evidences, instead defaulting to the old nonsensical multiregional sapiens origin that posit invisible Sapiens lineages in Europe and Asia completely unrelated to the actual hominid remains. Nor do we condone the author’s rabid Chinese chauvinism or dismissal of Lamarckian processes. We support the MGD theory because it makes a whole lot of logical sense, regardless of its application. ↩︎

  2. The full list of genes with fixed non-synonymous (missense) changes on the modern human lineage-i.e., genes where the ancestral (chimpanzee/Neandertal) allele has disappeared from all modern populations but is present in multiple Neandertal samples is: AHR, BOD1L1, C1orf159, C3, DNHD1, DNMT3L, FRMD8, OTUD5, PROM2, SHROOM4, SIX5, TBC1D3, ZNF106, CCDC82, CCDC144B, CCDC168, CHMP1A, DCHS1, FAM83G, GRM6, KNL1, LMNB2, NOTO, OPRM1, PDSS2, RFNG, SLC38A6, SUCLG2P4, TMPRSS7, ZNF510, ZNF516 with one fixed missense change, ADAM18, CASC5, SSH2, ZNHIT2 with two and SPAG5 with three. ↩︎

  3. It entails that all races, at least pure ones, from Africans to Australoids, should improve without cooking. It makes sense that racial differences in some aspects increased with cooking, when it comes to attributes that relied before on the spiritual space but have been reduced to purely organic, computations. Then bigger brains would now prevail where calculations could be forgone entirely before, drawing the answer from the extrasensor. ↩︎

  4. If the roots were longer, the pulp could not retreat as far because the dividing point in the root structure would be closer to the body. Since it is low-down in the lower jaw or high-up in the upper jaw, the pulp can retreat quite far as reparative dentin builds up. This means the taurodont tooth will last longer than the normal “cynodont” tooth which, by the way, means “dog-like”. Hillson said taurodontism was also found in modern man but it was a rare variant. Pinborg found it in less than 0.1% of modern humans. Stringer thought the shape of these roots is produced by “a delayed turning-in of the base of the roots” during their formation. He also thought this feature was related to the extreme wear endured by Neanderthal teeth, because teeth with undivided roots will maintain a whole chewing surface even when worn past the crown into the unseparated root area.

    Buried Alive
     ↩︎
  5. There are more people cured of hard drugs than weaned from cooking. According to drug addicts who did both, giving up cooking is significantly harder. While heroin withdrawal syndromes are physically harrowing, they are felt as external and can thus be combated, whereas the grip of cooking reaches much deeper, turning your whole value system, the very source of your will, upside down. ↩︎

  6. This ancestral state was characterized by conserved DNA repair mechanisms and other longevity-associated genes. Multiple transitions from deep, cold waters to warmer and shallower waters with stronger predation led to rapid phenotypic evolution. ↩︎

  7. Warmer waters may speed up metabolism, but some still live long in warm or relatively shallow waters (60 and over 100 years respectively). The cause of the degeneration is the higher predation of shallow waters: this kind of fish gets bigger and produces more eggs as it ages, creating a positive selective pressure toward longer-living individuals. Predators, though, ignore that and eat their share no matter what. This fact, on top of the multiple population expansion events that accompanied adaptation to new environments, led to an important radiation: the lack of purifying selection for longevity genes (because the fish are eaten before they grow old) led to their erosion by genetic drift. ↩︎

  8. The disposable soma theory posits that aging results from evolutionary trade-offs favoring reproduction over somatic maintenance. While this aligns with species-specific lifespan patterns, empirical support remains indirect. ↩︎

  9. And since everyone cooks, academics ignore raw food, and there is but a single elderly person on the planet who has practiced the instinctive raw paleo diet with discipline for the greatest part of his life (yet with a prior history of cancer, irradiation, and later imprisonment in dire conditions); understandably, quantifying that effect will stay out of reach for a long time. ↩︎

  10. The Danaids, the fifty daughters of Danaus, were condemned in the afterlife to fill a tub or vessel with water, but the vessel was full of holes or otherwise unable to retain the water, making the task eternally futile. No matter how much water they carried, it would constantly seep out, and the tub could never be filled. ↩︎

  11. The only important contradiction to his demonstration is that all rocks weighing more than just a little above 300 t (the Laterano obelisk, at 323 t) can no more technically be attributed to Romans or otherwise recent people than pyramids can be to Egyptians. ↩︎

  12. The first real signs of Kim’s unique gifts surfaced when he was three. One day when he was browsing through the newspaper, he came to me and asked, “Dad, what does ‘con-fi-den-ti-al’ mean?” Without thinking, I jokingly told him to look it up in the family dictionary. He did. About 30 seconds after putting his head down, crawling like a snowplow over to the desk, and pulling himself up, he found the word and read out the definition. We started watching him more carefully after that. The clincher came when we found he could recite, verbatim, whole paragraphs from a book at the mere mention of a page number.

    idem
     ↩︎
  13. In addition, he seems to possess two separate, independent optical systems that enable him to read the left page of a book with his left eye while simultaneously reading the right page with his right eye. Normally, however, he uses both eyes together when focusing on hard data and details, or when scanning maps and telephone books. Showing signs of a rare form of dyslexia, Kim can also read a page that’s turned sideways or upside down. A few years ago[…] I realized that he can also read mirror images. Typically when using both eyes to simultaneously read two ordinary pages, he’ll take no more than 15 seconds to scan them, with near total recall. Not long ago during one of our flights, he read Tom Clancy’s The Hunt for Red October in an hour and 25 minutes. Four months later, when I asked him the name of the book’s Russian radio operator, Kim knew it. Then he referred me to the page that described the character and quoted several of its passages. Sometimes when he reads at home he also watches television. But he always adjusts it to the lowest possible volume, or mutes it, and watches in complete silence. At least I can’t hear a thing. When I ask him how he knows what is said, he tells me he can hear what’s going on just fine. I assume he does.

    The Real rain man: A father’s inspiring account of Kim Peek, made famous by Oscar winner Dustin Hoffman (1996)
     ↩︎
  14. True siddhi exist, but more due to Tibetans’ famously extravagant sex life than their techniques: Tibet’s sexual mores, at least before the Chinese invasion, were extraordinarily liberated, if not degenerate, as the few Nazis who spent some time in the capital Lhassa could confirm. But the overwhelming majority of practitioners, including masters, show no such thing. ↩︎

  15. They were also said to have intercourse with Satan, who had an advantage over all mortal men due to his… vibrating sex. Interesting! ↩︎

  16. Commenting on the case, Dr. W. M. Krogman, Professor of Physical Anthropology at the University of Pennsylvania, points out that only at the very high temperature of 3000° F do bones even begin to fuse or melt, let alone disappear altogether. He tells how he has observed a body burn for eight hours in a crematorium at over 2000° F, “yet at the end of that time there was scarcely a bone that was not present and completely recognizable as a human bone… they were not ashes and powder as in the case of Mrs. Reeser and numerous other deaths by spontaneous combustion.”

    Stan Gooch The Origins of Psychic Phenomena

    To burn a body at an execution, for example, as much as two cart-loads of wood are required: and attempts by criminals to dispose of a body by fire are notoriously unsuccessful… this is a well-recognized medico-legal fact.

    Coroner Gavin Thurston Spontaneous Human Combustion
     ↩︎
  17. For a small price the few survivors gain a lasting cure for their depression and any unconscious death wish. It may be that the scathing experience either kills people or changes their life for the better, for a (relatively) small price. ↩︎

  18. Incidentally, it might also modify growing embryos up to the 6th day as retro-transcriptional activity is still strong, justifying yet again their hypergamous tendency. ↩︎

  19. It is also true, though not an excuse, that taking the time to provide a rodent with a diet decently close to a natural one is a nightmare. In any case, the thought that biscuits or protein pellets might actually disturb the natural expression of an animal’s genetics is yet to cross anyone’s mind. ↩︎

  20. In societies where brother-sister marriage is permitted in the ruling family, we may find excellence. Cleopatra was the offspring of brother-sister marriages continued through several generations and she was “not only handsome, vigorous, intellectual, but also prolific… as perfect a specimen of the human race as could be found in any age or class of society”.

    Leslie White Inbreeding, Incest, And The Incest Taboo, (P. Wolf) (2014)
     ↩︎
  21. It goes doubly for historically small populations that carry only ancestral inbreeding with respect to the founder population but where actual cousins do not marry: there the purge can only act on ancestral genes, and new mutations cannot be edited out quickly. ↩︎

  22. There are of course cases of genuinely over-inbred populations, caused by a recent history of excessive human hunting (or environmental destruction) producing bottleneck events and the disastrous fixation of deleterious mutations. ↩︎

  23. If, throughout a period of not more than six hundred years, all physically degenerate or mentally defective persons were sterilized, humanity would not only be delivered from an immense misfortune, but also restored to a state of general health such as we at present hardly imagine.

    Adolf Hitler Mein Kampf
     ↩︎
  24. Further investigation of SNPs in LD with rs2395029 revealed the presence of a previously unknown large haploblock spanning 1.9 megabases (MB), strongly associated with HLA-B*57:01 in individuals of European descent. The identification of this haploblock within the major histocompatibility complex (MHC) region raises broader questions for future research, such as why the haploblock in Europeans [1.9 MB] is larger than that in Africans.

    The HLA-B*57:01 allele corresponds to a very large MHC haploblock likely explaining its massive effect for HIV-1 elite control (2023)
     ↩︎
  25. It is my duty to inform you that you have failed to entirely measure up to the standards of the pure human genotype. You have two options: exile from the Fatherland forever or sterilization. Which do you choose? The fellow hesitated a moment; Feric spied tears in his eyes. Then suddenly Feric’s presence was noted and everyone—SS men and sour-faced inmates alike—snapped out Party salutes and shouted Hail Jaggar! with a vigor and enthusiasm that left nothing to be desired. Feric was deeply touched by such a demonstration of racial solidarity, coming as it did from those called upon to sacrifice their hope of future progeny for the good of the Fatherland. A moment later, the Holder at the front of the line squared his shoulders, clicked his heels, came to attention and replied to the SS major clearly and firmly: “I choose sterilization for the good of the Fatherland!”

    Norman Spinrad The Iron Dream
     ↩︎