We’re surrounded by advertising. Every single day, no matter where we are, thousands of auditory and visual stimuli are force-fed to our brains.

- On the internet: banners, pop-ups, sponsored articles, video pre-rolls, audio interruptions, spam emails, website background takeovers, push notifications, entry overlays, chat bubbles, product placements…
- In real life: subway billboards, bus panels, stadium names, street flyers, sprayed sidewalk ads, human signs, branded goodies and storefronts, toilet ads, car stickers, blinking signs, audio announcements…

The list goes on and on, as the space dedicated to advertising seems to have continuously increased while our means of avoiding it have grown weaker.
Surprised by how little I knew about the effects of this continuous unsolicited stimulation, I decided to look into available research.
Anywhere the eye can see, it’s likely to see an ad.
There’s no scientific consensus on the number of ads we’re exposed to daily, as estimates vary from a few hundred to several thousand. Why is it so hard to get a reasonable figure? Because it depends on a variety of factors that greatly affect the final result (sorted by level of importance):
1. What is considered an ad? Including brand labels and logos can multiply the final figure by 10. Think about every time you pass a brand name in a supermarket, the label on everything you wear, the condiments in your fridge, the cars on the highway…
2. Where does the subject live? The denser your living environment, the more ads you’re exposed to, as companies fiercely compete for your attention (and, ultimately, your wallet). Visual pollution is one of the drawbacks of living in a big city.
3. What is the subject’s job? During work hours, a hotel receptionist sees far fewer ads than a truck driver, who in turn is less exposed than a social media manager.
Moreover, being in proximity to an ad doesn’t mean actually seeing it. Because our brain can’t process the hundreds of signals sent at it every hour, we have learned to unconsciously ignore most advertising messages. That’s precisely why cognitive experts have developed a “scale of impact” for ads:

- Brand exposure | ~5,000 per day: a brand name or logo is within viewing or hearing distance of the subject (i.e., it could have been seen or heard).
- Ad exposure | ~350 per day: an ad is within viewing or hearing distance of the subject.
- Ad perception | ~150 per day: an ad attracted full attention for a few seconds or more.
- Ad awareness | ~90 per day: an ad was interpreted by the subject, who mentally processed its content.
- Ad engagement | ~10 per day: an ad made an impression on the subject, who is now emotionally motivated to investigate the product or service.

So we’re exposed to hundreds of ads per day. But at the same time, we have developed an unconscious screening process that’s very efficient at reducing both the intensity and duration of the attention we dedicate to commercials.
Each day, fewer than 25% of the ads we’re exposed to make it past our brain’s “attention wall”: so where’s the problem?
Our decisions suffer from multiple cognitive biases.
As an individual, I like to think of myself as “generally rational”: most of the time I manage to stay calm, think before acting, and try to step back before making impactful decisions.
In terms of buying behavior, aside from the exceptional splurge, I always try to get quality goods at a reasonable price. When I buy luxury or high-end stuff, I rationalize by saying quality is always pricey. An economist would say that I attempt to maximise the utility I get from every purchase.
The thing is, recent advances in psychology, social science and cognitive research have demonstrated that human decisions are far less conscious than we thought they were. There is increasing experimental evidence for the effectiveness of advertising in influencing people’s choices without their conscious awareness.
The most famous illustration of that is the Pepsi paradox: in blind tastings, Pepsi is quasi-systematically preferred, but Coke continues to be the absolute bestseller. It’s the triumph of branding over taste, as the mere presence of brand labels leads people to switch their opinion.
But how does that work?
Every day, we generate and make use of millions of unconscious associations.
Think of what happens when you enter a room you’ve never been in before: in a glance, you instantly identify what the different items are. Without thinking, you know that the glass container on the table is a cup meant to help you drink, that the table is a level surface to put things on, that the plastic switch near the door will turn the lights on, that the black flat square on the wall is an LCD TV and that the other small screen on the desk is a personal computer. This process is implicit: it’s automatic, uncontrollable, and operates non-consciously.
Here’s another example: “please, don’t think of a basketball game”. There you go: instantly and without perceivable effort, you thought about tall black players, an orange spherical ball with black ribs, a red hoop with a net, big foam hands, a transparent backboard, a rectangular floor made of wood, the “Kiss cam”, sneakers with high tops, Lebron James, etc.
Our ability to quickly and effortlessly form associations and categorize items is a truly remarkable skill. Because it’s innate and shared among humans, we fail to notice or realize how exceptional it is. On the contrary, AI developers and robotics engineers — because they’re struggling to teach this aptitude to machines — know how singular it is. We’re actually not that good at solving complex math equations or intricate logic problems: what really sets us apart is our capacity to efficiently generate and make use of a seemingly infinite number of associations.
Each of these unconscious associations is tied to positive or negative emotions, depending on your personal experience. This is called valence: the intrinsic attractiveness or averseness of an event, object or situation. For instance, some people consider Halloween a cheerful event, while others find it rather depressing or annoying. Same goes for birthdays, clowns, Thanksgiving, etc.
What does that have to do with advertising? Well, ads are designed to create those associations in our mind.
Some of them provide valuable information.
Marketing practitioners often say that the role of advertising is to provide information that enables people to make better choices. This is indeed the case for a certain proportion of ads, which serve the purpose of:

- Raising awareness: “Hey! My product exists, here’s how it works and where to buy it.” Mariah just released a new single; it’s modestly called “Infinity” and it’s available on iTunes.
- Persuading the audience: “Hey! My product is great because it has these awesome characteristics/ratings/appraisals.” Dentists apparently prefer to use Oral-B over its competitors.
- Making promises: “Hey! My product will help you achieve this/experience that.” Use Slack, and you’ll be “32% more productive”. Whatever that means.
These types of ads convey valuable information that is helpful to the consumer’s decision-making process. The problem is that a lot of ads don’t work that way. In fact, the majority of commercials are meant to influence people through unconscious processes of which they’re unaware.
Ads are effective because they rely on implicit mechanisms.

“Most advertising influences behaviour not through the conscious processing of verbal or factual messages, but by mediating relationships between the consumer and the brand — and it does this using types of communication that are not necessarily processed with conscious attention.”
— Paul Feldwick, former Executive Planning Director, BMP DDB agency

Think about alcoholic drinks, perfumes, watches, jewellery, cigarettes, sodas, haute couture or energy drinks: their ads rarely tell you about the quality of the ingredients, the resistance of the materials, the reliability of the mechanics or the outcomes they’ll produce. Exposing product features or making rational arguments is not needed here, as something else is at play.
Generate strong positive associations with a product.
Coke = good times
Evaluative conditioning (EC) is one of the simplest and best-known conditioning mechanisms: pair things in the hope that the positive or negative associations of one will rub off onto the other. It is the reason why so many brands rely on cute animals (Coca-Cola’s polar bears), celebrity endorsements (Pepsi’s Beyoncé spots), jaw-dropping landscapes (nearly all car commercials) or hot girls (think beer or perfume) in their ads.
The premise is that your product will become more attractive if it’s positioned alongside something or someone people love. I’ve personally always doubted this theory, as it seems way too “dumb” to be true. I really can’t get my head around the idea that rational, educated adults could be influenced simply by repeatedly placing cute bunnies or muscular men alongside wet wipes or protein shakes. The thing is, research proves me wrong.
In 2012, a team from INSEAD and the University of Tübingen decided to investigate whether the enduring success of EC could be caused by automatic responses. They ran six experiments in which a neutral image — human faces or product logos — was paired with something either pleasant (beautiful scenery, people having a fun day out) or unpleasant (graveyards, cockroaches).
Then, they asked participants their opinion of the face or logo they’d just seen. For a response to be considered uncontrollable, it should appear even when the mind is occupied with something totally different, or when the subject makes a strongly motivated attempt to repress their natural impulse.
That is why, before the experiment began, some participants were explicitly asked to contradict the visual cues (by liking images paired with nasty stuff and disliking those paired with cool things) and/or to memorize four-digit numbers. A group was even promised a €20 payout for participants who succeeded best at contradicting the visual nudges.
Overall, the studies unveiled strong statistical evidence of automaticity: even with a conscious, motivated effort to resist, participants’ opinions were still skewed by the visual associations presented to them. As obvious and unsubtle as the ads may seem, they still manage to overcome our attempts at rational thinking.
Advertise products as an extension of ourselves.
Be iconic: buy an industrially made scented liquid that’s sold everywhere, at 100 times the production cost.
When buying something, we integrate the brand’s associations into our perception of ourselves. In other words, the act of buying is not solely about functional utility but also about what we think of ourselves and how we want to be perceived.
Because the products we use and the experiences we live form an important part of our identity and self-esteem, our buying preferences are largely influenced by “irrational” messages conveyed through ads. A few examples:

- Wristwatches: why would you wear a $10,000 Rolex instead of a $20 Swatch or Casio? They’re all durable, waterproof and accurate watches. The Casio even has additional features such as a backlight and an alarm…
- Wine, vodka or champagne bottles: can you taste the difference between a $100 vintage Moët and a $20 Mumm sparkling wine? Or between Grey Goose and Absolut vodka? A recent macro-study of 6,000+ blind tastings showed that, on average, people enjoy more expensive wines slightly less.
- Handbags: a Hermès leather bag costs roughly $6,000, a Michael Kors $500 and a Zara one $100. Is one Hermès bag really worth 60 Zara ones?
- Diamond rings: moissanite is an excellent substitute for diamond: a highly durable, flawless stone with an intense sparkle. To the untrained eye, the only distinguishable characteristic is the 10x price difference. Would you buy a moissanite ring for your lover?
- Perfumes: an older woman would never buy a Guess perfume, while young girls rarely put on Lancôme or Chanel. Why is that?
- Sunglasses (Prada vs. Warby Parker), vacuum cleaners (Dyson vs. Hoover), deodorants (Dove vs. Axe), clothes (Abercrombie vs. Gap), …

Products are instrumental to our sense of self: because we have a certain ideal of ourselves and we care about what people think, we choose to buy things that convey our values and aspirations. When brands carry associations, buying preferences deliver a message about the consumer: “Because I consider myself a virile male, I won’t buy a Dove deodorant” / “I want to be an intellectual, so I’ll buy a Moleskine notebook” / “I’m a classy guy who wears Hugo Boss suits”, etc.
Ads, being both pervasive and abundant, are a serious threat to our well-being.
Starting in the 1950s, industry professionals have been very efficient at reducing regulatory pressure and increasing ads’ public imprint: advertising, they say, is a harmless “mirror of cultural values” that simply “redistributes consumption” by “promoting choice”. The result? Today, it’s virtually impossible to opt out of exposure.
The problem is, you don’t have to dig for long to find consistent evidence that ads increase overall spending and normalise certain behaviours by influencing individuals on a subconscious level. Do you really think $550 billion is spent each year on something that doesn’t work?

“Today’s best and brightest graduates in psychology and cognitive science are snapped up by the advertising industry because they want to know how best to manipulate us. The truth none of us wants to admit is that advertisers know our minds better than we do.”
— Clive Hamilton, Centre for Applied Philosophy and Public Ethics

Advertising has an access-all-areas pass in today’s society, a pass the industry is taking full advantage of to seed superficial needs and consumerist thoughts in our minds. Because of its omnipresence and effectiveness, advertising should be considered a public issue, as it constrains our ability to solve social and environmental problems.
How can we imagine a better world when selfish, unnecessary cravings are planted in our brains by a constant influx of highly engineered marketing messages? How can we fight anxiety when everything pushes us into social competition? How can we stop over-consumption and reduce waste when we’re told hundreds of times per day to buy stuff?
It’s time for a change.
My mom won’t be the only one reading this.
I’ve learned most of what I know through the writings of others. Having people take some of their time to read my work means a lot to me.
Physicists assemble the LUX (Large Underground Xenon) detector, which was one of the world’s most sensitive searches for the direct detection of dark matter particles. Once in place inside the Homestake mine, the liquid-xenon-filled capsule was expected to detect three or four particles of dark matter a year. It wound up detecting zero. (John B. Carnett/Bonnier Corporation via Getty Images)
You can’t get mad at a team for trying the improbable, hoping that nature cooperates. Some of the most famous discoveries of all time have come about thanks to nothing more than mere serendipity, and so if we can test something at low-cost with an insanely high reward, we tend to go for it. Believe it or not, that’s the mindset that’s driving the direct searches for dark matter.
In order to understand how to find dark matter, however, you have to first understand what we know so far, and what the evidence points to as far as direct detection goes. We haven’t found it yet, but that’s okay. Not finding dark matter in an experiment is not evidence that dark matter doesn’t exist. The indirect evidence all shows that it’s real. The question before us is how to demonstrate its reality, hopefully by finding the particle responsible for it directly.
The particles and antiparticles of the Standard Model of particle physics are exactly in line with what experiments require, with only massive neutrinos providing a difficulty and requiring beyond-the-standard-model physics. Dark matter, whatever it is, cannot be any one of these particles, nor can it be a composite of these particles. (E. SIEGEL / BEYOND THE GALAXY)
Let’s begin with a basic recap of dark matter: the idea, the motivation, the observations, the theory and then we’ll talk about the hunt.
The idea. You know the basics: there are all the protons, neutrons and electrons that make up our bodies, our planet and all the matter we’re familiar with, as well as some photons (light, radiation, etc.) thrown in there for good measure. Protons and neutrons can be broken up into even more fundamental particles — the quarks and gluons — and along with the other Standard Model particles, make up all the known matter in the Universe.
The big idea of dark matter is that there’s something other than these known particles contributing in a significant way to the total amounts of matter in the Universe. Why would we think such a thing?
The two bright, large galaxies at the center of the Coma Cluster, NGC 4889 (left) and the slightly smaller NGC 4874 (right), each exceed a million light-years in size. But the galaxies on the outskirts, zipping around so rapidly, point to the existence of a large halo of dark matter throughout the entire cluster. (ADAM BLOCK/MOUNT LEMMON SKYCENTER/UNIVERSITY OF ARIZONA)
The motivation. We know how stars work, and we know how gravity works. If we look at galaxies, clusters of galaxies and go all the way up to the largest-scale structures in the Universe, we can extrapolate two things. One: how much mass there is in these structures at every level. We look at the motions of these objects, we look at the gravitational rules that govern orbiting bodies, whether something is bound or not, how it rotates, how structure forms, etc., and we get a number for how much matter there has to be in there. Two: we know how stars work, so as long as we can measure the starlight coming from these objects, we can know how much mass is there in stars.
These two numbers don’t match, and they don’t match spectacularly. There had to be something more than just stars responsible for the vast majority of mass in the Universe. This is true for the stars within individual galaxies of all sizes all the way up to the largest clusters of thousands of galaxies in the Universe.
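The dynamical side of that comparison boils down to Newtonian orbits: for a body on a roughly circular orbit, the mass enclosed within its radius is M(<r) = v²r/G. Here is a back-of-the-envelope sketch (the velocity, radius and stellar mass below are illustrative round numbers for a Milky Way-like galaxy, not measurements):

```python
# Dynamical mass estimate from a flat rotation curve: M(<r) = v^2 * r / G.
# All input numbers are illustrative round values, not measurements.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # kiloparsec, m

v = 220e3                # orbital speed ~220 km/s, roughly flat out to large radii
r = 50 * KPC             # 50 kiloparsecs from the galactic center

m_dynamical = v**2 * r / G / M_SUN    # enclosed mass, in solar masses
m_stars = 6e10                        # rough stellar mass of a Milky Way-like galaxy

print(f"dynamical mass ~{m_dynamical:.1e} M_sun, "
      f"vs ~{m_stars:.0e} M_sun in stars ({m_dynamical / m_stars:.0f}x more)")
```

With these inputs the dynamical mass comes out near 6 × 10¹¹ solar masses, roughly an order of magnitude more than the stars can account for, which is the mismatch described above.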
The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. The Universe is 75–76% hydrogen, 24–25% helium, a little bit of deuterium and helium-3, and a trace amount of lithium by mass. After tritium and beryllium decay away, this is what we’re left with, and this remains unchanged until stars form. Only about 1/6th of the Universe’s matter can be in the form of this normal (baryonic, or atom-like) matter. (NASA / WMAP SCIENCE TEAM)
The observations. This is where it gets fun, because there are a ton of them; I’ll focus on just three. When we extrapolate the laws of physics all the way back to the earliest times in the Universe, we find that there was not only a time so early when the Universe was hot enough that neutral atoms couldn’t form, but there was a time when even nuclei couldn’t form! The formation of the first elements in the Universe after the Big Bang — due to Big Bang Nucleosynthesis — tells us with very, very small errors how much total “normal matter” there is in the Universe. Although there is significantly more than what’s around in stars, it’s only about one-sixth of the total amount of matter we know is there.
The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s. This image encodes a huge amount of information about the early Universe, including its composition, age, and history. The fluctuations are only tens to hundreds of microkelvin in magnitude, but definitively point to the existence of both normal and dark matter in a 1:5 ratio. (ESA AND THE PLANCK COLLABORATION)
The fluctuations in the cosmic microwave background are particularly interesting. They tell us what fraction of the Universe is in the form of normal (protons+neutrons+electrons) matter, what fraction is in radiation, and what fraction is in non-normal, or dark matter, among other things. Again, they give us that same ratio: that dark matter is about five-sixths of all the matter in the Universe.
The observations of baryon acoustic oscillations in the magnitude where they’re seen, on large scales, indicate that the Universe is made of mostly dark matter, with only a small percentage of normal matter causing these ‘wiggles’ in the graph above. (MICHAEL KUHLEN, MARK VOGELSBERGER, AND RAUL ANGULO)
And finally, there’s how structure forms on the largest scales. This is particularly important, because we can not only see the ratio of normal-to-dark matter in the magnitude of the wiggles in the graph above, but we can also tell that the dark matter is cold, i.e. moving below a certain speed even when the Universe is very young. These pieces of knowledge lead to outstanding, precise theoretical predictions.
According to models and simulations, all galaxies should be embedded in dark matter halos, whose densities peak at the galactic centers. On long enough timescales, of perhaps a billion years, a single dark matter particle from the outskirts of the halo will complete one orbit. The effects of gas, feedback, star formation, supernovae, and radiation all complicate this environment, making it extremely difficult to extract universal dark matter predictions. (NASA, ESA, AND T. BROWN AND J. TUMLINSON (STSCI))
The theory. This tells us that around every galaxy and cluster of galaxies, there should be an extremely large, diffuse halo of dark matter. This dark matter should have practically no “collisions” with normal matter — upper limits indicate that it would take light-years of solid lead for a dark matter particle to have a 50/50 shot of interacting just once — there should be plenty of dark matter particles passing undetected through Earth, me and you every second, and dark matter should also not collide or interact with itself, the way normal matter does.
There are some indirect ways of detecting this: the first is to study what’s called gravitational lensing.
When there are bright, massive galaxies in the background of a cluster, their light will get stretched, magnified and distorted due to the general relativistic effect known as gravitational lensing. (NASA, ESA, AND JOHAN RICHARD (CALTECH, USA); ACKNOWLEDGEMENT: DAVIDE DE MARTIN & JAMES LONG (ESA/HUBBLE); NASA, ESA, AND J. LOTZ AND THE HFF TEAM, STSCI)
By looking at how the background light gets distorted by the presence of intervening mass (solely from the laws of general relativity), we can reconstruct how much mass is in that object. There’s got to be dark matter in there, but from looking at colliding clusters of galaxies, we learn something even more profound.
The gravitational lensing map (blue), overlayed over the optical and X-ray (pink) data of the Bullet cluster. The mismatch of the locations of the X-rays and the inferred mass is undeniable. (X-RAY: NASA/CXC/CFA/M.MARKEVITCH ET AL.; LENSING MAP: NASA/STSCI; ESO WFI; MAGELLAN/U.ARIZONA/D.CLOWE ET AL.; OPTICAL: NASA/STSCI; MAGELLAN/U.ARIZONA/D.CLOWE ET AL.)
The dark matter from the two clusters really does pass right through itself, and accounts for the vast majority of the mass; the normal matter in the form of gas creates shocks (in X-ray/pink, above), and only accounts for some 15% of the total mass in there. In other words, about five-sixths of that mass is dark matter! By looking at colliding galaxy clusters and monitoring how both the observable matter and the total gravitational mass behave, we can come up with an astrophysical, empirical proof for the existence of dark matter.
But that’s indirect; we know there’s supposed to be a particle associated with it, and that’s what the hunt is all about.
If dark matter does have a self-interaction, its cross-section is tremendously low, as direct detection experiments have shown. It also doesn’t scatter very much off of nuclei. (Mirabolfathi, Nader arXiv:1308.0044 [astro-ph.IM])
The hunt. This is the great hope: direct detection. Because we don’t know what lies beyond the Standard Model — we’ve never discovered a single particle not encompassed by it — we don’t know what dark matter’s particle (or particles) should be, what its properties should look like, or how to find it. We don’t even know if it’s all one thing, or if it’s made up of a variety of different particles.
So we look at what we’d be able to detect instead, and look there. We can look for interactions down to a certain cross-section, but no lower. We can look for energy recoils down to a certain minimum energy, but no lower. And at some point, experimental limitations — natural radioactivity, cosmic neutrons, solar/cosmic neutrinos, etc. — make it impossible to extract a signal below a certain threshold.
Hall B of LNGS with XENON installations, with the detector installed inside the large water shield. If there’s any non-zero cross section between dark matter and normal matter, not only will an experiment like this have a chance at detecting dark matter directly, but there’s a chance that dark matter will eventually interact with your human body. (INFN)
Long story short: the latest experiment to search for dark matter directly didn’t find it, at least not yet. That’s been the story for every direct detection experiment ever performed, confirmed, and robustly tested, over and over again.
And that’s okay! Unless dark matter happens to be of a certain mass with a certain interaction cross-section, none of the designed experiments are going to see it. That doesn’t mean dark matter isn’t real; it just means that dark matter is something other than what our experiments are optimized to find.
The cryogenic setup of one of the experiments looking to exploit the hypothetical interactions between dark matter and electromagnetism. Yet if dark matter doesn’t have specific properties that current experiments are testing for, none of the ones we’ve even imagined will ever see it directly. (AXION DARK MATTER EXPERIMENT (ADMX) / LLNL’S FLICKR)
So we keep looking, we keep thinking of new possibilities for what it could be, and we keep thinking of new ways to search for it. That’s what science at the frontiers is like. Personally, I don’t expect these direct detection attempts to be successful; we’re stabbing in the dark hoping we hit something, and there are little-to-no good reasons for dark matter to be in these ranges. But it’s what we could see, so we go for it. If we find it, Nobel Prizes and new physics discoveries for everyone, and if we don’t, we know a little more about where the new physics isn’t. But just as you shouldn’t fall for the hyper-sensationalized claims that dark matter has been directly detected, you shouldn’t fall for the ones that say “there’s no dark matter” because a direct detection experiment failed.
We are after the most fundamental stuff in the Universe, and we’ve only recently begun to understand it. It shouldn’t be a surprise if the search takes a little — or even a lot — longer. In the meantime, the journey for knowledge and understanding of just what it is that holds the Universe together continues.
Astronomers have finally found the last of the missing universe. It’s been hiding since the mid-1990s, when researchers decided to inventory all the “ordinary” matter in the cosmos — stars and planets and gas, anything made out of atomic parts. (This isn’t “dark matter,” which remains a wholly separate enigma.) They had a pretty good idea of how much should be out there, based on theoretical studies of how matter was created during the Big Bang. Studies of the cosmic microwave background (CMB) — the leftover light from the Big Bang — would confirm these initial estimates.
So they added up all the matter they could see — stars and gas clouds and the like, all the so-called baryons. They were able to account for only about 10 percent of what there should be. And when they considered that ordinary matter makes up only 15 percent of all matter in the universe — dark matter makes up the rest — they had only inventoried a mere 1.5 percent of all matter in the universe.
Now, in a series of three recent papers, astronomers have identified the final chunks of all the ordinary matter in the universe. (They are still deeply perplexed as to what makes up dark matter.) And despite the fact that it took so long to identify it all, researchers spotted it right where they had expected it to be all along: in extensive tendrils of hot gas that span the otherwise empty chasms between galaxies, more properly known as the warm-hot intergalactic medium, or WHIM.
A Million-Galaxy Stack
Early indications that there might be extensive spans of effectively invisible gas between galaxies came from computer simulations done in 1998. “We wanted to see what was happening to all the gas in the universe,” said Jeremiah Ostriker, a cosmologist at Princeton University who constructed one of those simulations along with his colleague Renyue Cen. The two ran simulations of gas movements in the universe acted on by gravity, light, supernova explosions and all the forces that move matter in space. “We concluded that the gas will accumulate in filaments that should be detectable,” he said.
Except they weren’t — not yet.
“It was clear from the early days of cosmological simulations that many of the baryons would be in a hot, diffuse form — not in galaxies,” said Ian McCarthy, an astrophysicist at Liverpool John Moores University. Astronomers expected these hot baryons to conform to a cosmic superstructure, one made of invisible dark matter, that spanned the immense voids between galaxies. The gravitational force of the dark matter would pull gas toward it and heat the gas up to millions of degrees. Unfortunately, hot, diffuse gas is extremely difficult to find.
A number of research teams searched for this gas, finding bits of the missing matter along the way. By 2014, astronomers had identified around 70 percent of it. But 30 percent was still missing.
To spot the hidden filaments, two independent teams of researchers searched for precise distortions in the CMB, the afterglow of the Big Bang. As that light from the early universe streams across the cosmos, it can be affected by the regions that it’s passing through. In particular, the electrons in hot, ionized gas (such as the WHIM) should interact with photons from the CMB in a way that imparts some additional energy to those photons. The CMB’s spectrum should get distorted.
Unfortunately the best maps of the CMB (provided by the Planck satellite) showed no such distortions. Either the gas wasn’t there, or the effect was too subtle to show up.
But the two teams of researchers were determined to make them visible. From increasingly detailed computer simulations of the universe, they knew that gas should stretch between massive galaxies like cobwebs across a windowsill. Planck wasn’t able to see the gas between any single pair of galaxies. So the researchers figured out a way to multiply the faint signal by a million.
First, the scientists looked through catalogs of known galaxies to find appropriate galaxy pairs — galaxies that were sufficiently massive, and that were at the right distance apart, to produce a relatively thick cobweb of gas between them. Then the astrophysicists went back to the Planck data, identified where each pair of galaxies was located, and then essentially cut out that region of the sky using digital scissors. With over a million clippings in hand (in the case of the study led by Anna de Graaff, a Ph.D. student at the University of Edinburgh), they rotated each one and zoomed it in or out so that all the pairs of galaxies appeared to be in the same position. They then stacked a million galaxy pairs on top of one another. (A group led by Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, combined 260,000 pairs of galaxies.) At last, the individual threads — ghostly filaments of diffuse hot gas — suddenly became visible.
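The statistics behind the stacking trick are simple: averaging N cutouts leaves the common signal untouched while shrinking the random noise by a factor of √N. Here is a toy numpy sketch (the filament amplitude, noise level and cutout counts are made-up, purely to illustrate the effect):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model of the stacking trick: each "cutout" is a 1-D strip of sky between
# a galaxy pair. A faint filament signal sits in the middle, buried under
# noise 100x stronger, so no single cutout shows anything.
n_cutouts = 200_000
strip_len = 32
signal = np.zeros(strip_len)
signal[12:20] = 0.01                     # hypothetical filament amplitude
noise_sigma = 1.0

cutouts = signal + rng.normal(0.0, noise_sigma, size=(n_cutouts, strip_len))

# Averaging leaves the signal alone but shrinks the noise by sqrt(N).
stacked = cutouts.mean(axis=0)
stacked_noise = noise_sigma / np.sqrt(n_cutouts)

single_snr = signal.max() / noise_sigma          # far below 1: invisible
stacked_snr = signal.max() / stacked_noise       # well above 1: detectable
print(f"per-pixel SNR: single cutout ~{single_snr:.2f}, stack ~{stacked_snr:.1f}")
```

In the real analyses the cutouts also had to be rotated and rescaled so every galaxy pair lined up before averaging, but the noise-suppression principle is the same.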
A group of physicists is questioning our understanding of how quarks (a type of elementary particle) arrange themselves under extreme conditions. And their quest is revealing that elements beyond the edge of the periodic table might be far weirder.
One of the most often talked about, but least understood, metrics in our industry is the concept of “data durability.” It is often talked about in that nearly everyone quotes some number of nines, and it is least understood in that no one tells you how they actually computed the number or what they actually mean by it.
It strikes us as odd that so much of the world depends on the concept of RAID and Encodings, but the calculations are not standard or agreed upon. Different web calculators allow you to input some variables but not the correct or most important variables. In almost all cases, they obscure the math behind how they spit out their final numbers. There are a few research papers, but hardly a consensus. There just doesn’t seem to be an agreed upon standard calculation of how many “9s” are in the final result. We’d like to change that.
In the same spirit of transparency that leads us to publish our hard drive performance stats, open source our Reed-Solomon Erasure Code, and generally try to share as much of our underlying architecture as practical, we’d like to share our calculations for the durability of data stored with us.
We are doing this for two reasons:
1. We believe that sharing, where practical, furthers innovation in the community.
2. Transparency breeds trust. We’re in the business of asking customers to trust us with their data. It seems reasonable to demonstrate why we’re worthy of your trust.
11 Nines Data Durability for Backblaze B2 Cloud Storage
At the end of the day, the technical answer is “11 nines.” That’s 99.999999999%. Conceptually, if you store 1 million objects in B2 for 100,000 years, you would expect to lose 1 file. There’s a higher likelihood of an asteroid destroying Earth within a million years, but that is something we’ll get to at the end of the post.
How to Calculate Data Durability
Amazon’s CTO put forth the X million objects over Y million years metaphor in a blog post. That’s a good way to think about it — customers want to know that their data is safe and secure.
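The metaphor converts directly into arithmetic: if durability is read as an annual per-object survival probability, the expected number of lost objects is just objects × years × (1 − durability). A quick sanity check of our own (an illustration, not anyone's official math):

```python
def expected_losses(nines: int, objects: float, years: float) -> float:
    """Expected lost objects, assuming durability is an annual per-object rate."""
    annual_loss_rate = 10.0 ** (-nines)   # e.g. 11 nines -> 1e-11 per object-year
    return objects * years * annual_loss_rate

# At 11 nines, storing 10,000 objects for 10 million years
# works out to about one expected loss.
losses = expected_losses(11, 10_000, 10_000_000)
```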
When you send us a file or object, it is actually broken up into 20 pieces (“shards”). The shards overlap so that the original file can be reconstructed from any 17 of the original 20 pieces. We then store those pieces on different drives that sit in different physical places (we call those 20 drives a “tome”) to minimize the possibility of data loss. When one drive fails, we have processes in place to “rebuild” the data for that drive. So, to lose a file, four drives in the same tome would have to fail before we have a chance to rebuild the first one.
The math on calculating all this is extremely complex. Making it even more interesting, we debate internally whether the proper methodology is the Poisson distribution (which models the number of events occurring over a continuous interval) or the binomial distribution (which models outcomes across discrete trials). We spent a shocking amount of time debating this and believe that both arguments have merits. Rather than posit one absolute truth, we decided to publish the results of both calculations (spoiler alert: either methodology tells you that your files are safe with Backblaze).
The math is difficult to follow unless you have some facility with advanced statistics. We’ll forgive you if you want to skip the sections entirely; just click here.
When dealing with the probability of X number of events occurring in a fixed period of time, a good place to start is the Poisson distribution.^
For inputs, we use the following assumptions:^
\* The average rebuild time to achieve complete parity for any given B2 object with a failed drive is 6.5 days. A given file uploaded to Backblaze is split into 20 “shards” or pieces. The shards are distributed across multiple drives in a way that any single drive can fail and the file remains fully recoverable; a file is not lost unless four drives were to fail in a given vault before they could be “rebuilt.” This rebuild is enabled through our Reed-Solomon Erasure Code. Once one drive fails, the other shards are used to “rebuild” the data on the original drive (creating, for all practical purposes, an exact clone of the original drive). The rule of thumb we use is that for every 1 TB that needs to be rebuilt, one should allow 1 day. So a 12 TB drive would, on average, be rebuilt after 12 days. In practice, that number may vary based on a variety of factors, including, but not limited to, our team attempting to clone the failed drive before starting the rebuild process. Based on whatever else may be happening at a given time, a single failed drive may also not be addressed for one day. (Remember, a single drive failure has a dramatically different implication than a hypothetical third drive failure within a given vault; different situations call for different operational protocols.) For the purposes of this calculation, and a desire for simplicity where possible, we assumed an average of a one-day lag time before we start the rebuild.
\* The annualized failure rate of a drive is 0.81%. For the trailing 60 days while we were writing this post, our average drive failure rate was 0.81%. Long-time readers of our blog will note that hard drive failure rates in our environment have fluctuated over time. But we also factor in the availability of data recovery services, including, but not limited to, those offered by our friends at DriveSavers. We estimate a 50% likelihood of full (100%) data recovery from a failed drive that’s sent to DriveSavers.
That cuts the effective failure rate in half to 0.41%.
For our Poisson calculation, we use the standard Poisson probability formula: P(k) = (Lambda^k * e^(-Lambda)) / k!
The values for the variables are:
\* Annual average failure rate = 0.0041 per drive per year on average
\* Interval or “period” = 156 hours (6.5 days)
\* Lambda = ((0.0041 * 20) / ((365 * 24) / 156)) = 0.00146027397 for every “interval or period”
\* e = 2.7182818284
\* k = 4 (we want to know the probability of 4 “events” during this 156-hour interval)
Here’s what it looks like:
Poisson calculation enumerated
If you’re following along at home, type this into an infinite precision calculator:^
The sub result for 4 simultaneous drive failures in 156 hours = 1.89187284e-13. That means the probability of it NOT happening in 156 hours is (1 – 1.89187284e-13) which equals 0.999999999999810812715 (12 nines).
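Ordinary double-precision floats are in fact accurate enough to reproduce that sub-result. Here is the calculation in Python (our sketch of the formula above, using the inputs quoted in this post):

```python
from math import exp, factorial

annual_rate = 0.0041                    # effective annualized drive failure rate
drives = 20                             # shards (drives) in a tome
period_hours = 156                      # the 6.5-day rebuild window
periods_per_year = (365 * 24) / period_hours

# Expected drive failures per tome per 156-hour period
lam = (annual_rate * drives) / periods_per_year

# Poisson probability of exactly k = 4 failures in one period
k = 4
p = lam**k * exp(-lam) / factorial(k)   # ~1.89e-13
```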
But there’s a “gotcha.” You actually should calculate the probability of it not happening by considering that there are 56 “156 hour intervals” in a given year. That calculation is:
Yes, while this post claims that Backblaze achieves 11 nines worth of durability, at least one of our internal calculations comes out to 12 nines. Why go with 11 and not 12?
1. There are different methodologies to calculate the number, so we are publishing the most conservative result.
2. It doesn’t matter (skip to the end of this post for more on that).
For those interested in getting into the full detail of this calculation, we made a public repository on GitHub. It’s our view on how to calculate the durability of data stored with erasure coding, assuming a failure rate for each shard, and independent failures for each shard.
First, some naming. We will use these names in the calculations:
\* S is the total number of shards (data plus parity)
\* R is the repair time for a shard in days: how long it takes to replace a shard after it fails
\* A is the annual failure rate of one shard
\* F is the failure rate of a shard in R days
\* P is the probability of a shard failing at least once in R days
\* D is the durability of data over R days: not too many shards are lost
With erasure coding, your data remains intact as long as you don’t lose more shards than there are parity shards. If you do lose more, there is no way to recover the data.
One of the assumptions we make is that it takes R days to repair a failed shard. Let’s start with a simpler problem and look at the data durability over a period of R days. For a data loss to happen in this time period, more shards than we have parity for (four or more, in our 17-of-20 scheme) would have to fail.
We will use A to denote the annual failure rate of individual shards. Over one year, the chance that a shard will fail is evenly distributed over all of the R-day periods in the year. We will use F to denote the failure rate of one shard in an R-day period: F = A * (R / 365)
The probability of failure of a single shard in R days is approximately F, when F is small. The exact value, from the Poisson distribution, is: P = 1 - e^(-F)
Given the probability of one shard failing, we can use the binomial distribution’s probability mass function to calculate the probability of exactly n of the S shards failing: C(S, n) * P^n * (1 - P)^(S - n), where C(S, n) is the binomial coefficient (the number of ways to choose n shards out of S).
We also lose data if more than n shards fail in the period. To include those, we can sum the above formula for n through S shards, to get the probability of data loss in R days: P_loss = sum over i from n to S of C(S, i) * P^i * (1 - P)^(S - i)
The durability in each period is the complement of that: D = 1 - P_loss
Durability over the full year requires durability in every one of the periods, which is the product of the per-period probabilities: D_year = D^(365 / R)
And that’s the answer!
For the full calculation and explanation, including our Python code, please visit the GitHub repo:
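Putting the pieces together, here is a minimal sketch of the calculation described above (the official version, with the full explanation, lives in the GitHub repo; the parameter values below are the ones quoted earlier in this post):

```python
from math import comb, exp

def annual_durability(S=20, parity=3, R=6.5, A=0.0041):
    """Durability over one year for S shards where up to `parity` shards can be lost.

    A is the effective annual failure rate per shard (0.41% after factoring
    in drive recovery), and R is the repair window in days.
    """
    F = A * R / 365                      # failure rate of one shard in R days
    P = 1 - exp(-F)                      # probability a shard fails at least once in R days
    # Probability of losing MORE shards than we have parity for, in one R-day period
    loss = sum(comb(S, n) * P**n * (1 - P) ** (S - n)
               for n in range(parity + 1, S + 1))
    D = 1 - loss                         # durability over one R-day period
    return D ** (365 / R)                # durability over all periods in the year

# 20 shards, any 17 recover the file, 6.5-day rebuild, 0.41% effective failure rate
d = annual_durability()
```

With these inputs the result lands at eleven nines, matching the headline number of this post.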
For anyone in the data business, durability and reliability are very serious issues. Customers want to store their data and know it’s there to be accessed when it’s needed. Any relevant system in our industry must be designed with a number of protocols in place to ensure the safety of our customers’ data.
But at some point, we all start sounding like the guitar player for Spinal Tap. Yes, our nines go to 11. Where is that point? That’s open for debate. But somewhere around the 8th nine we start moving from practical to purely academic.^ Why? Because at these probability levels, it’s far more likely that:
\* An armed conflict takes out data center(s).
\* Earthquakes / floods / pests / or other events known as “Acts of God” destroy multiple data centers.
\* There’s a prolonged billing problem and your account data is deleted.
That last one is particularly interesting. Any vendor selling cloud storage relies on billing its customers. If a customer stops paying, after some grace period, the vendor will delete the data to free up space for a paying customer.
Some customers pay by credit card. We don’t have the math behind it, but we believe there’s a greater than 1 in a million chance that the following events could occur:
\* You change your credit card provider. The credit card on file is invalid when the vendor tries to bill it.
\* Your email service provider thinks billing emails are SPAM. You don’t see the emails coming from your vendor saying there is a problem.
\* You do not answer phone calls from numbers you do not recognize; Customer Support is trying to call you from a blocked number; they are trying to leave voicemails but the mailbox is full.
If all those things are true, it’s possible that your data gets deleted simply because the system is operating as designed.
What’s the Point? All Hard Drives Will Fail. Design for Failure.
Durability should NOT be taken lightly. Backblaze, like all the other serious cloud providers, dedicates valuable time and resources to continuously improving durability. As shown above, we have 11 nines of durability. More importantly, we continually invest in our systems, processes, and people to make improvements.
Any vendor that takes the obligation to protect customer data seriously is deep into “designing for failure.” That requires building fault tolerant systems and processes that help mitigate the impact of failure scenarios. All hard drives will fail. That is a fact. So the question really is “how have you designed your system so it mitigates failures of any given piece?”
Backblaze’s architecture uses erasure code to reliably store any given file in multiple physical locations (mitigating specific types of failures, like a faulty power strip). Backblaze’s business model is profitable and self-sustaining, and provides us with the resources and wherewithal to make the right decisions. We also make the decision to do things like publish our hard drive failure rates, our cost structure, and this post. We also have a number of ridiculously intelligent, hard-working people dedicated to improving our systems. Why? Because the obligation around protecting your data goes far beyond the academic calculation of “durability” as defined by hard drive failure rates.
Eleven years in and counting, with over 600 petabytes of data stored from customers across 160 countries, and well over 30 billion files restored, we confidently state that our system has scaled successfully and is reliable. The numbers bear it out and the experiences of our customers prove it.
And that’s the bottom line for data durability.
One aspect of the Poisson distribution is that it assumes that the probability of failure is constant over time. Hard drives, in Backblaze’s environment, exhibit a “bathtub curve” for failures (a higher likelihood of failure when they are first turned on and at the forecasted end of usable life). While we ran various internal models to account for that, it didn’t have a practical effect on the calculation. In addition, there’s some debate to be had about what the appropriate model is; at Backblaze, hard drives are thoroughly tested before they are put into our production system (affecting the theoretical extreme front end of the bathtub curve). Given all that, for the sake of a semblance of simplicity, we present a straightforward Poisson calculation.
This is an area where we should emphasize the conceptual nature of this exercise. System design and reality can diverge.
The complexity will break most standard calculators.
Previously, Backblaze published its durability to be 8 nines. At the time, it reflected what we knew about drive failure rates and recovery times. Today, the failure rates are favorable. In addition, we’ve worked on and continue to innovate solutions around speeding up drive replacement time.
Quark matter – an extremely dense phase of matter made up of subatomic particles called quarks – may exist at the heart of neutron stars. It can also be created for brief moments in particle colliders on Earth, such as CERN’s Large Hadron Collider. But the collective behaviour of quark matter isn’t easy to pin down. In a colloquium this week at CERN, Aleksi Kurkela from CERN’s
An object is Fractal when the structure of its constituent parts reflects the structure of the whole, at various scales. The classic example would be a tree, where a branch of the tree held upright is as the whole tree, and even a leaf of a tree has in its veins a tree-like branching structure. Thus a Fractal object is said to be self similar. Another property of Fractal objects is Nesting, i.e. the smaller constituents that make up the whole object, and that reflect the structure of the whole object, are themselves contained within it. But then these smaller copies of the whole themselves contain even smaller copies reflecting themselves and the whole; and so on and so forth, potentially ad infinitum.
We have in the above diagram 4 examples of Fractal objects that have the property of self similarity, in that the organization of the constituent parts that make up the object reflects that of the whole, overall object. The two in the top row and the one in the bottom left of the diagram are idealized mathematical fractal objects derived from the perfect solids, i.e. the Tetrahedron, Cube and Dodecahedron. The fractal object at the bottom right of the diagram is an actual vegetable that is a genetic cross between the Cauliflower and Broccoli, called Romanesco Broccoli or Roman Cauliflower.
We propose that the organization of the brain is also Fractal, not just in structure but also in process. So that what we conceptualize for the whole brain in terms of structure and functioning, we may likewise infer and conceptualize the same with regard to the brain's constituent parts. That is, there should be a way of looking at the brain which shows that the Neuron (i.e. brain cell) is as the Neuron complex (i.e. Cortical Cell Column), which is as the Neuron Macrocomplex (e.g. Cortical Patch); and so on for the entire brain. And not just in terms of arrangement but also in terms of functioning.
And we find that this is so. For these levels of brain description we find a tree-like structure of branching inwards towards a nexus and a corresponding branching outwards from it. Just like the roots, trunk and branches of the sort of tree you'd find in a forest. And at all levels of description we find a looping-back process of recurrent connection, where the branching outwards from the nexus feeds back onto the branching inwards to the same. But also in terms of process and functioning we discover self similarity. So for instance, at all levels of description we discover in the brain a process of competition: synapses compete, neurons compete, cortical columns compete with one another, and then ideas, perceptions and moods also compete. Another Fractal process of the brain is one of linking up and coming together. Synapses link up along a stretch of dendrite, neurons link up through synaptic connections, Cortical columns link up, and then ideas and representations also link up in space and are chained together in time.
The above diagram shows the fractal structure of the cerebral cortex, which, along with its wiring, makes up most of the mass of the human brain. Diagram 1 depicts a pyramidal neuron, the main neuron type of the cerebral cortex, and shows its tree-like branching structure and a recurrent feedback projection. Diagram 2 depicts a cortical column, into which several thousand pyramidal neurons are clustered to form the basic representational unit of the cerebral cortex. Diagram 3 depicts a Columnar Complex made up of many Cortical Columns, and diagram 4 depicts 7 interconnected cortical patches, which are themselves sheet-like structures consisting of a flat arrangement of many Columnar Complexes densely packed side by side. At all these levels of description we see a basic pattern of branching in, branching out and looping back. So that in this way the brain is organized fractally. This line of reasoning and way of looking at things can also be used with regard to the entire brain and all its main information-processing constituent structures.
We further propose that this Fractal structuring of the brain, and self-similar way of conceptualizing its organization and functioning, derives from a recursive generative process, which is a property of many Fractal structures. This recursive process starts from a seed or atom of recursion, and through it all brain structure emerges. Furthermore, we also propose that all brain functioning and the process of mind is really a continuation of this same recursive process. So that the process by which brains come into being, i.e. Ontogenesis or Neurogenesis, on the one hand, and the process of brain functioning or mechanics of mind on the other, are really expressions of a common underlying process and exist on a continuum. We have had for years reason to suspect that this Atom of Recursion or Seed of Brain and Mind may potentially exist, because all biological forms derive from the recursive bifurcating process of cell division starting from the fertilized egg. We believe that the process of the brain and the process of thought can be conceptualized as a continuation of this process of ontogenesis and also neurogenesis, i.e. the production of the physical substrate of mind.
Taking things further, it is the belief of the Author that there is something quite fundamental about what is happening in the brain and the process of mind. For we believe the Fractal of Brain Structure and Brain Process is perfectly reflected in the Fractal of the Organization of the Universe and the Fractal of Cosmic Process. So therefore we should be able to perfectly extrapolate from the brain into Society, the World, the Galaxy and on to the entire Universe. And likewise we should be able to interpolate from the Universe, the World and Society into the structure and process of the brain. In fact this correspondence between what is happening in brains and minds on the one hand, and what is happening externally in the world and Universe on the other, is something that is suggested by a lot of existing ideas about the brain, artificial intelligence and cosmology. For instance, a way in which contemporary research into brain functioning and Artificial Intelligence is correct is in the idea that evolutionary processes are happening in the brain. This is reflected in approaches such as Genetic Algorithms, Neural Darwinism, Memes etc. Also it has been suggested by one physicist that perhaps there should exist a connection between Epistemology, i.e. the study of knowledge, and Cosmology.
We believe that by understanding the brain we also understand the Universe and vice versa. So that indeed 'As is the Macrocosm so is the Microcosm' and 'As is the Human Mind so is the Cosmic Mind.' Modern Science has started to find very appealing the idea that the Universe exists as information. What the understanding we are proposing allows us to see is that this information of the Universe is structured as knowledge and that this knowledge is structured as a Cosmic Tree.
All of this understanding ultimately allows us to answer conclusively the question concerning the Nature of Consciousness. And to show that the reason why contemporary Philosophers and Neuroscientists haven't been able to find the answer to the problem of Consciousness is that they've been asking the wrong question all along. For the working assumption has been Materialism, the ontological position in Philosophy that existence is physical and consists of matter, and that therefore Consciousness reduces to the physical brain and the material Universe. The Truth is that it doesn't. It's the other way round: it is the ontological position in Philosophy called Idealism, or the idea that existence is really Consciousness, that is the truth. What the Fractal Brain architecture, and the reducing of Cosmic Structure to this same Fractal conception, allows us to do is to reduce all existence to a Single Consciousness. This approach works together with the idea that the physical Universe is illusory and that what is behind the illusion is mathematical or platonic existence. So for example the Universe can be thought to exist in the same way that the mathematical fractal object called the Mandelbrot set 'exists', i.e. platonically and transcendentally. But what is actually manifest and 'real' is subjective experience, or Consciousness of being in the illusory physical Universe. Because the Universe is a Fractal structure that is of the same nature as the brain, and the two are continuous with each other, the Consciousness of our brains can be reasonably supposed to be of the same nature as the Consciousness that would correspond to equivalent structures of the Universe at various scales, i.e. Planetary, Galactic and Cosmic. And so we can therefore put forth the truly astonishing hypothesis that a single Unitary, Undivided and Indivisible Consciousness passes through all these equivalent Fractal structures in a single chain of transmigration.
Thereby living out all the experiences of the lives of all the living entities existing within the Universe from its beginning to end. One subjective experience at a time, one day at a time, one life at a time. And that this singular all-encompassing consciousness may be called God. So the Fractal Brain idea takes us to the truly stunning revelation that the Mystery of Consciousness and the Mystery of God are one and the same. That is the idea that everyone is God, the common belief that is found in all the Esoteric traditions of all the world's major religions, i.e. Gnosticism in Christianity, Kabbalah in Judaism, Sufism in Islam, Tantra in Hinduism etc.
The theory of the brain that is to come holds the key not just to Artificial Intelligence and the Technological Singularity, but also to the means by which Science will be reconciled with Spirituality and the true Religion will be resurrected. The revelation of this theory and the beginning of this process will occur in London in 2012.
The above diagram is meant to show the correspondences between, on the one hand, various aspects of the Brain Theory, listed in the left-most column, and on the other hand, related Scientific and Mystical/Spiritual ideas. It shows that key mystical/spiritual ideas found especially in Esoteric Religion are also inherent aspects of the brain theory, so that the brain theory is the means by which these ancient ideas can make a return and be accepted by mainstream society. This interesting correspondence makes perfect sense when we understand that the mystery of Consciousness and the mystery of God are one and the same. Then we should expect the brain, its structures and processes, to reflect something of the Divine. This diagram tabulates some of the ways that this is indeed the case. Even more controversially, by including various ideas or facets from Science in this table, we are trying to suggest and give support for the idea that the structure and workings of the Brain are very special and fundamental, in that they contain the microcosmic representation of the structure and workings of the Universe. That is, the brain theory is also a theory of everything outside of the brain, i.e. society, the World, and the entire Universe. So that not only can we interpolate aspects of the world into the brain (for example, some prominent neuroscientists suggest that the process of evolution is happening in the dynamics of the brain); but also we can fully extrapolate from the brain to gain an understanding of the processes, structure and purpose of the Universe, i.e. life. The brain is therefore like a Rosetta Stone which fully reflects the overall structure and process of the Universe. To understand the brain is also to understand the Universe. In recent years, Scientists have started to see the fabric of the Universe in terms of information.
The next step is to understand that the informational universe is structured as knowledge and that the structure of this knowledge of the Universe is the Cosmic Tree. The process of the Universe, i.e. Life, is also Epistemology, or the process of knowledge; the brain theory will show how this is the case. Indeed the Microcosm reflects the Macrocosm, and the microcosm of the brain reflects the macrocosm of the entire Universe.
The Nature Of Reality
The great Ontological riddle concerning the true Nature of Reality is solved by the ultimate truth that a person's real identity is God. Together with some recent discoveries from the world of mathematics, namely the Mandelbrot set, we can finally make the case for Idealism, or the idea that all existence is consciousness.
In this section we discuss the nature of reality. Essentially what we'll be doing is showing how it is that Idealism, or the idea that existence is really subjective, can work in practice. Or to put it another way, I am going to show how it is that all existence is really consciousness.
This point of view stands in opposition to the alternative and more widely held view concerning the nature of reality, which is called Materialism. This is the belief that existence consists of physical matter. It supposes a material reality and a physical universe upon which the subjective world of consciousness is based. The belief in Materialism is presently the dominant view of reality in the World today. However there are good reasons to suppose that the idea of Materialism may be inherently flawed. These reasons derive from lines of inquiry within philosophy and also some results obtained from the field of quantum physics within science, which studies the sub-atomic basis of material existence. For instance, some of the greatest philosophers of all time, namely Descartes, Berkeley and Kant, have all come to the same conclusion: that we can never have certain knowledge of the physical World and that we can only really know the subjective world, i.e. our consciousness. Therefore the idea of Materialism is flawed from the outset, for it tries to reduce something that we do have certain knowledge of, namely our conscious subjective states, into something we can never really know with any certainty as being truly existent, i.e. the physical world. Moving along to the world of science, one of the great physicists of the 20th century, John Wheeler, who worked with Albert Einstein, summed up what quantum mechanics was telling us about the nature of reality in one sentence: 'There's no out there, out there.' This statement is meant to suggest that our belief in an external and independently existing physical reality is one which is being undermined by results from quantum physics.
So having outlined some of the problems inherent in the idea of Materialism, what I'll do here is to take Idealism as our starting assumption and show how it is that existence is really consciousness. In so doing we'll be describing an alternative view of reality that also provides answers to some of the biggest puzzles in science and philosophy today. It is also a way of looking at the nature of existence that confirms what the founders of the World's great religions and the great mystics have been telling us all along. That indeed there is an existence above and beyond the physical and temporal. Also that it is this higher super reality which generates the appearance of an external world and the illusion we call physical reality. I shall explain how this illusion works so that it can clearly be seen that indeed all existence is consciousness, and furthermore that all consciousness is really one consciousness. This one consciousness can rightly be called God, the ultimate source and ground of all being. So even though this section is called 'The nature of reality', it can just as easily be called 'The nature of God'. This is because God is the ground of all reality, and all the reality that we perceive and are aware of is really a series of manifestations of the ultimate reality that is God. When we consider reality in its completeness then inevitably we arrive at the divine. Total reality is God, and to truly understand the nature of reality as we know it is also to understand the nature of God.
Idealism, or the idea that the nature of existence is really consciousness, has always had its adherents throughout the course of human history. However, all the while, the trouble with the idea of Idealism has been that it has been impossible to argue the case for it convincingly and compellingly. The reason why this is so is because, in the past, the difficulty with the idea of Idealism has been how to explain the nature of the material world. It's all very well to say that all existence is really subjective or is made up of consciousness, but the problem then is to provide an explanation for the objective world and the external physical universe. In response to this question, in the past, the only answer that Idealists have been able to come up with is to say that the physical world of matter is somehow illusory, without any further explanation of why the illusion of physical reality is so convincing. Obviously there is 'something' behind the appearance of the external physical world, but there has been an explanatory gap as to what this 'something' is. However the situation has changed, and this has been brought about by a recent discovery in mathematics and also an invention from the world of computer science. What these discoveries or inventions give us are the necessary conceptual stepping stones and metaphors, which enable us to finally explain the truth behind Idealism. They provide us with insight into what is behind the exquisite beauty, detail and complexity that we find in the universe, and also allow us to grasp the role of consciousness in the overall scheme of things.
So what are these new additions to humankind's knowledge that better allow us to understand the nature of existence? The first is the discovery of fractal mathematics and in particular the Mandelbrot set. The second is the invention and widespread use of virtual reality computer environments. I shall discuss each of these in turn and relate how they enable us to understand the nature of reality and the relationship between consciousness and the external physical world.
The Mandelbrot Set
The first of our discoveries which help us to understand the nature of reality was made in the late 1970s by the mathematician Benoit Mandelbrot. His discovery, called the Mandelbrot set, is a mathematical object which, though succinctly described by a formula that fits onto one line, nonetheless contains within it an infinity of complexity, pattern and beauty. Here is a picture of the entire Mandelbrot set...
And this is the mathematical formula which describes the Mandelbrot set: z → z² + c. The set consists of every complex number c for which this simple iteration, started from z = 0, stays bounded forever rather than escaping to infinity.
Though this may not at first seem all that remarkable, we really come to understand the significance and majesty of the Mandelbrot set when we realize that we can magnify small portions of it to reveal ever more complexity and pattern, ad infinitum. That is, if we imagine that we had a special magnifying glass that allowed us to keep looking deeper and deeper into the Mandelbrot set, at higher and higher resolutions without limit, then there never comes a point when the emerging novelty and complexity of pattern comes to an end. We can keep zooming into the Mandelbrot set forever to reveal ever more elaborate form and intricate design. It truly contains within it an endless universe of variety, structure and exquisite detail. Here are some more pictures of small portions of the Mandelbrot set which have been zoomed into...
These pictures of the Mandelbrot set give us an idea of its beauty and intricacy. It is important to point out that there is nothing random in the process by which these pictures are generated. The pictures shown are unvarying aspects of the Mandelbrot set and would come out the same whenever the Mandelbrot formula, shown earlier, is programmed into a digital computer. It is like an endlessly detailed map or graph, made up of a never-ending array of myriad interlocking geometric shapes and patterns. The Mandelbrot set is a mathematical object of stunning beauty and awe-inspiring depth. In fact its depth is limitless and indeed infinite.
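To make concrete just how little information generates all of this detail, here is a minimal sketch in Python of the standard escape-time procedure for drawing the set. The one-line rule being iterated is z → z² + c; the grid bounds, resolution and iteration cap are arbitrary illustrative choices, not part of the set's definition:

```python
# Minimal escape-time sketch of the Mandelbrot set.
# A point c is (approximately) in the set if z -> z*z + c,
# starting from z = 0, stays bounded under repeated iteration.

def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # escaped: c lies outside the set
            return False
    return True               # still bounded: treat c as inside

def render(width=60, height=24):
    """Crude ASCII picture of the set on [-2, 1] x [-1.2, 1.2]."""
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)
        row = ''
        for i in range(width):
            x = -2.0 + 3.0 * i / (width - 1)
            row += '*' if in_mandelbrot(complex(x, y)) else ' '
        rows.append(row)
    return '\n'.join(rows)

if __name__ == '__main__':
    print(render())
```

Running the script prints a crude ASCII silhouette of the familiar cardioid-and-bulb shape; the famous detailed pictures come from using a much finer grid, zooming the bounds in on a small region, and colouring each outside point by how quickly it escapes.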
Now, the question I would like to ask the reader is this. Was the Mandelbrot set discovered or was it invented? That is, did Dr Benoit Mandelbrot come across something that already existed, or did he create and define the Mandelbrot set? The answer surely has to be that he discovered it. There is no way that Dr Mandelbrot could have mapped out and conceived the infinity of detail and pattern that is contained in the mathematical object bearing his name. Therefore he must have discovered it. The question then is this. Before Dr Mandelbrot discovered the Mandelbrot set, where did it exist? We may equally ask: before the beautifully intricate patterns are rendered onto a VDU screen by a computer programmed with an algorithm for calculating the Mandelbrot set, where do these patterns exist?
The answer to this question lies in the understanding of a notion known as mathematical or Platonic existence. And it is this idea which also allows us to answer a question posed earlier. We have already put forward the proposition of Idealism, which supposes that existence is really consciousness. The problem then is to explain what is behind the illusion of the physical universe. If existence is not material, then from where does all the form, pattern and complexity in the physical universe arise? Here is where the notion of mathematical or Platonic existence comes in. The proposition here is that the nature of physical existence is one and the same as the nature of Platonic or mathematical existence. Put another way, the illusory material world only really exists in the same way that the Mandelbrot set exists. And just as a digital computer is required to manifest the patterns inherent within the Mandelbrot set, so consciousness is necessary to manifest the objects intrinsic to the physical Universe. Furthermore, in the same way that there is a mathematical formula which describes the Mandelbrot set in its entirety, so there is a mathematical formula which describes the physical Universe down to its every last detail. And so the physical Universe is really a mathematical object in the same way that the Mandelbrot set is.
A brief history of the mathematical Universe
If we trace the history of the relationship between the world of mathematics and abstract ideas on the one hand, and the so-called 'real world' on the other, then we find an interesting interplay which lends support to the idea that the real nature of the physical Universe is mathematical. We also start to see that, all through history, some of the greatest thinkers of all time noted a deep relationship between the physical world and the world of mathematical ideas. We may start by examining the history of the idea of Platonic existence.
Platonic existence is named after the famed ancient Greek philosopher Plato. It was Plato who postulated the existence of perfect 'forms' residing in a transcendent and immutable realm outside of space and time. Even though in this physical reality we see all the myriad transient manifestations of existence, from horses, people, cats and dogs to stars, mountains and clouds, what Plato was saying is that there exist perfect archetypes of all these things, which act as eternal templates to which the transient manifestations conform. And it is these archetypes which exist transcendentally in what is known as the Platonic realm.
The idea of Plato's forms is related to an older idea associated with another great philosopher-sage of antiquity, Pythagoras, who asserted that 'the Universe is a number'. He saw a mathematical order behind the Universe that he believed could be uncovered through intellect and reason. One of his discoveries was the mathematical relationship between musical notes that are harmonic with one another, though he is best known for Pythagoras' theorem, which is taught in school mathematics. There is also an interesting side to him which is not taught in school. He was a mystic, and is believed to have introduced the esoteric mystery traditions into Greek civilization, having learnt their secrets in ancient Egypt. In his own lifetime he was revered as a god-man by the members of a religious sect which grew around him. Apart from the idea of the mathematical universe, the many doctrines he taught included reincarnation and vegetarianism. His influence on ancient Greek philosophy was immense, and he is even believed to have coined the word 'philosophy'. Plato, mentioned earlier, is himself sometimes referred to as a Pythagorean, which is testimony to the impact of Pythagoras's thinking.
Going forward in time now to 17th century England, the great physicist and natural philosopher Isaac Newton is said to have remarked that 'God is a mathematician'. It was Newton who first formulated the laws of gravity and motion using a formal mathematical framework, which he himself played a great part in developing. In a few succinct mathematical equations he was able to encapsulate and derive all the knowledge about the physical universe that had existed before him. He developed a mathematical system that could not only explain but also predict the motion of planets and other objects within the universe of matter. Thus he observed what Plato and Pythagoras had also envisaged long before him: a transcendent order existing in the natural world. And in a manner similar to his illustrious forerunners, he discovered that this hidden pattern or regularity was captured perfectly using the language and concepts of mathematics. This is what led him to proclaim that 'God is a mathematician'.
In more modern times this great puzzle concerning the relationship between mathematics and physical reality has remained. Since the time of Newton, a great plethora of 'proven' theories about almost every aspect of the physical Universe have come into being: from Einstein's theory of general relativity to quantum mechanics, and from thermodynamics to the theories of particle physics. Without exception these theories are expressed in the language of mathematics. It seems that mathematics has an incredible, almost magical ability to capture the essence of how the physical world works. There seems to be a very intimate and close connection between the world of abstract mathematics and the world of matter, energy, space and time which we observe and inhabit. The Nobel prize winning physicist Eugene Wigner summed up the situation in his much-quoted essay 'The Unreasonable Effectiveness of Mathematics in the Natural Sciences'. What he meant was that not only are all physical laws expressed neatly and succinctly using mathematics, but also that mathematical ideas developed completely separately from the world of physics seem to fit hand in glove as descriptions of the physical world. That is, mathematical ideas pursued for their intrinsic beauty and internal consistency, without any consideration of any relationship they may have with physical reality, become the perfect basis for the formulation of physical laws. And so modern scientists have also discovered this amazing relationship between mathematics and the physical world, just as the likes of Plato, Pythagoras and Newton had done in the past.
From our survey of the past it would seem that the idea being put forward, that the nature of physical existence is the same as the nature of existence of mathematical objects such as the Mandelbrot set, has in a sense always been in the background. The idea has been alluded to all through the history of science, and is further supported by more recent lines of thought which suggest that the most fundamental substrate of the material universe is not matter and energy but rather information. I believe that this currently fashionable trend in science is a useful stepping stone in helping people to understand and accept the idea that the nature of the existence of the physical Universe is ultimately mathematical. That is, the physical Universe does not really exist materially; it is consciousness that is the real nature of existence.
Consciousness, the mathematical universe & virtual reality
We'll be discussing here how the existence of computer virtual reality simulated environments provides for us a metaphor for understanding the relationship between the mathematical Universe and consciousness. We have already put forward the idea that the physical Universe that is 'out there' is really a mathematical object that has no real existence. That is, it only exists Platonically or mathematically, in the same way that the Mandelbrot set exists, as described earlier. The next step is then to elaborate upon the relationship between this seemingly real but illusory mathematical physical Universe and the actually existent subjective world of conscious states. Here is where the phenomenon of computer virtual reality systems provides for us a ready stepping stone to aid our understanding of the relationship between the subjective and the objective.
Virtual reality simulations have existed for several decades. In them the objects and dynamics of the 'real' world are modelled in order to allow a computer to animate and create a visual and auditory simulation of the real world. This visual and auditory replica of external reality is then projected onto some sort of screen, with the sound output through loudspeakers. In this way a human may feel that he or she is immersed in, and is a part of, the virtual world so created. For instance, some of the first virtual reality simulations were used to train pilots to fly various types of aircraft. In this case the things modelled would include clouds and the ground (mountains, roads, rivers, airports and so on), and the plane itself would also become a part of the simulation: the physical characteristics and performance details of the aircraft would be part of the virtual reality. More recently virtual reality simulations have been used to create lifelike game environments, where the player may wander in and around buildings and various kinds of simulated terrain, with the objective of shooting things and/or finding some goal. The most advanced virtual reality simulations are becoming extremely lifelike, almost to the point of being photo-realistic. Perhaps sometime in the not too distant future it may be difficult to tell the difference between a virtual reality simulation and real life.
Anyway, the point is this: though the virtual reality simulation may seem real, it is only modelled using a database of mathematical data representing geometric objects such as polygons, lines and points. So it doesn't really have existence in the physical sense. But what we have been demonstrating is the notion that the physical Universe itself only exists in the way that mathematical objects exist. So just as the virtual reality simulation exists only as a mathematical construct and is made visually manifest on a computer VDU screen, so the physical Universe exists only as a mathematical object and is made manifest as states of consciousness. The difference is that in the latter case the virtual reality simulator is the equation which describes the entire Universe, and the equivalent of the VDU screen is the substance of consciousness, or what philosophers would call qualia. So in the same way that a computer virtual reality simulator instructs how the colours or pixels on a VDU screen should arrange themselves to create the impression of a virtual world, so God uses the equation which describes the mathematical object called the physical Universe to create the impression that we are people living out our lives here on planet Earth, existing within the wider Cosmos.
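The sense in which a simulated world is 'only mathematics until rendered' can be illustrated with a toy example, sketched here in Python. The 'world' below is nothing but eight numeric coordinates; the rendering step (a simple perspective projection, with an arbitrarily chosen viewer distance) is what turns those bare numbers into something that could appear on a screen:

```python
# A virtual 'world' is nothing but numbers until it is rendered.
# Here the world is eight 3-D points (the corners of a cube);
# 'rendering' projects them onto a 2-D screen plane with a
# simple perspective divide, so nearer points appear larger.

CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def project(point, viewer_distance=4.0):
    """Perspective-project a 3-D point onto the 2-D screen plane."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# The 'screen' contents: purely a function of the underlying data.
screen_points = [project(p) for p in CUBE]
```

The analogy being drawn in the text is that consciousness plays the role of this final rendering step: without it, the 'cube' remains just a set of numbers with only mathematical existence.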
So the equation which describes the entire physical Universe works together with consciousness to create the existence of things. Consciousness needs the mathematical Universe to create the details and content of our subjective states; at the same time, the mathematical Universe needs consciousness to make itself manifest. Thus the objective and the subjective are seen as two sides of the same coin, an eternal duality that may be called God. This idea will be explored a little later, but next we shall be examining the relationship between all the seemingly separate conscious entities that exist within the Universe. Indeed we shall demonstrate how all consciousness reduces to one consciousness.
How is everything one consciousness?
This sub-section asks 'how is everything one consciousness?', which is really the same as asking 'how does the ontological position of Idealism really work?' or 'how is everything consciousness?'. The answer to these questions, and the solution to the problem of showing how it is that everything is consciousness, is to demonstrate how it is that everything is one consciousness.
In the physical universe, from its beginning to its eventual demise, there have existed and will exist an astronomical number of life forms, and we may reasonably suppose that a certain proportion of these life forms experience internal subjective states, i.e. that they have consciousness. Now, one of the problems with showing how Idealism can work is this: if you abandon the physical universe as the basis for consciousness, or deny that the subjective world arises from the world of matter, then how do you account for consciousness? And how do you define the relationship between all the conscious beings that exist, having done away with the physical world of energy/matter and space/time as the substrate of existence?
The answer to this conundrum is to be found if we take consciousness as the starting point. In fact we take one single indivisible consciousness as the starting point. The next step is to propose that all the separate and individual conscious beings that will ever exist in the physical universe are really different stages in the evolution and continual unfolding of the one consciousness that we have already postulated. Therefore this one consciousness, in a sequential manner, encompasses all the seemingly differentiated conscious beings, which are really manifestations of this one consciousness. This proposal is related to an idea that the physicist John Wheeler once put to Richard Feynman, the renowned Nobel prize winning physicist, and which Feynman took seriously. This is the idea that every electron that exists in the physical Universe is really the same electron. By weaving backwards and forwards through time (an electron travelling backwards in time appearing as a positron), a single electron could seem to be everywhere at once, thereby appearing as a myriad multitude of different electrons.
However, with regard to our idea of a single all-pervading consciousness, the difference here is that this one consciousness is not appearing at all places in the Universe at the same time. The reason is that, because we are demonstrating that everything is consciousness, we are rejecting the idea that the physical Universe is the basis for existence. In doing so we are denying not only the actual existence of matter and energy, but also the actual existence of space and time. That is, we are saying that matter, energy, space and time only exist in the same way that mathematical objects exist, as was explained earlier. Therefore it doesn't make sense to say that the one consciousness exists at many different places at the same time, because we no longer have space and time as meaningful references to places and moments where consciousness is thought to be manifesting. For we are no longer presupposing that consciousness arises out of a physical substrate, i.e. brains, located at certain points in space and time. What we do have as our fundamental assumption is one consciousness. And our starting point becomes the assumption of subjective existence and subjective time, which is the same as the experience of the one consciousness and its evolving stream of subjective experiences.
This one consciousness, then, is the central assumption that we use to account for all the myriad seemingly separate conscious entities inhabiting the mathematical object called the physical universe, of which you the reader, I the writer and all the other beings on planet Earth are examples. All these seemingly separate consciousnesses are really one consciousness, by being the sequential expressions of the one consciousness. Put another way, the one undivided and indivisible consciousness that is God is able to be all the separate consciousnesses by being each one of them, one at a time. A metaphor to help the reader visualize this concept is to imagine every life form that has existed and will ever exist in the physical Universe as a pearl. This will of course add up to a lot of separate pearls. If we then imagine this multitude of pearls strung out on a single continuous cosmic thread, then we get a version of the 'Great chain of being'. Along this thread will be expressed all the life forms in the Universe, and the thread demonstrates the relationship between all the seemingly separate consciousnesses. That is, they are all different points along the conscious stream of the one consciousness. Put another way, the relationship between the conscious entities that exist on planet Earth, including all human beings, is that we are each other's subjective past and subjective future. So if we walk down the street and look around at the people about us, we are really looking at our past and future lives.
We have thus described the way in which the ontological position of Idealism, i.e. the view that existence is really consciousness as opposed to being matter, is the true nature of existence. We have done this by showing what is the actual nature of physical existence, i.e. that it is mathematical, and also by showing that all consciousness is really one consciousness. It is the belief of the Author that this one all encompassing consciousness may rightly and properly be called 'God'. If we examine the holy scriptures of the World's great religions then we discover support for this idea. Essentially what I am saying is that what philosophers and scientists call the mystery of the nature of consciousness, and what theologians call the mystery of the nature of God; are one and the same. They are really two mysteries that are the same mystery, and two puzzles which are really different aspects of the same puzzle. We will examine this idea further.
We are one, though not merely of one flesh but truly of one soul
It is almost inevitable that the ideas just presented will seem extremely unfamiliar and counter-intuitive. However, if we examine key metaphysical assertions made concerning the nature of God in the World's great religions, we find an amazing correspondence with the idea of an all-encompassing singular consciousness just described. For instance, it is an almost universal theme that God is immanent within us all. So we have the idea of the 'Christ within' in Christianity, the 'Krishna within' in Hinduism, Allah who is 'closer than your jugular vein' in Islam, the Buddha within, etc. At the same time we also discover the idea that we and God really exist as an inseparable unity. It is also a central tenet of Judaism, Christianity and Islam that God is one. In fact the greatest commandment in Judaism and Christianity begins 'The Lord our God, the Lord is one'. This idea of God's essential unity is also found in Hinduism, where in the Bhagavad Gita the 'super soul' or 'over soul', which is God within us, is described thus: '[it] appears divided but has never divided and is always situated as one'.
Furthermore, this idea of unity and oneness is also used to describe our true relationship with one another. So we find this passage in the Koran: 'Your creation and your resurrection is but that of a single soul'. In the Bible we have the idea of our underlying unity so expressed: 'There is neither Jew nor Greek, slave nor free, male nor female, for you are all one in Christ Jesus [i.e. God].' If we refer back to our idea of one all-encompassing consciousness of which we are all different manifestations, then the idea of the unity of God and the idea of the essential unity of all living things are clearly seen to be expressions of the same truth. And that truth is the idea that everyone is God.
When we also consider the idea of reincarnation, which is an important and significant feature of all the World's great faith traditions (as is demonstrated in another section of this website called 'The truth about eternal life'), then this corresponds with our idea of a single undivided all-encompassing consciousness which sequentially expresses every seemingly individual consciousness, one at a time. In this manner each separate conscious entity is really a different stage in the life cycle of a single cosmic entity, i.e. God. We see then that the idea of everyone being God and the idea of reincarnation really go hand in hand. When we take the idea that 'Everyone is God' as the truth behind all World religion, together with the notion that reincarnation is also a universal truth behind all the World's great faith traditions, then this gives strong confirmation to our idea of a single all-encompassing consciousness, to which everything reduces, and which transmigrates through each and every living conscious entity in the Universe: the one consciousness that is God, having the whole of eternity to experience infinity, one embodied consciousness at a time. The universal truths behind World religion would therefore correspond perfectly to our proposal for explaining the nature of reality, making the case for the ontological position of Idealism and showing that all existence is consciousness.
Summing up and Conclusion
To sum up then, what we have described in this section is an account of how the ontological position of Idealism, or the belief that existence is really consciousness, is the true nature of things. Using recent discoveries from the fields of mathematics and computer science, we have constructed a picture of what is behind the illusion of physical reality and have put forward an idea concerning the nature of the physical Universe: that the physical Universe only really exists Platonically or mathematically, i.e. that the material world only exists in the same way that mathematical objects such as the Mandelbrot set exist. Then, using the modern phenomenon of computer virtual reality simulations as a metaphor, we described the relationship between the mathematically existent physical Universe and consciousness. We have also shown how all existence reduces to a single undivided and indivisible consciousness.
Finally, we explored the direct parallels that exist between the picture of the nature of reality we have presented and ideas from religion and the world of mysticism. We highlighted the correspondence between ideas concerning the nature of reality and ideas concerning the nature of God, suggesting that the two are inextricably linked. Essentially we elaborated upon the idea that the mystery of the nature of consciousness and the mystery of the nature of God are one and the same thing.
The conclusion then and what all of this is really saying is this... We normally arrive at the assumption that we are existing as a person associated with a physical body gazing out upon a vast impersonal physical Universe that exists outside of us. However the truth is that we are God gazing through the eyes of a person reflecting upon ourselves, that is the Universe, that is God. We live through God but God also lives through us, experiencing eternity one moment at a time, one day at a time, one life at a time. So that in the same way that as mortal human beings we sleep and awaken to a new day, so it is that as God we die and awaken to a new life, eternally. Living in creation, living out the lives of the creatures and sustaining the created. This is the nature of God, this is the nature of reality.