Mar 15, 2016
The 8th Annual Global 'Zeitgeist Day' Symposium Promotes Sustainability, Global Unity, and a Post-Scarcity Society
Jan 31, 2015
Promotes Global Unity, Social Betterment and a More Humane Society
Sep 12, 2014
Features Live Music, Short Films, Comedy and Art, Promotes Social Consciousness Through the Power of Art
Mar 01, 2014
Toronto Main Event and Beyond
Feb 03, 2014
A New Book by The Zeitgeist Movement
Apr 01, 2016 Host: Casey Davidson
In this episode, Casey Davidson (Australian national coordinator for TZM) discusses whether the Zeitgeist Movement should interact with political parties, how to balance making ethical choices with reaching larger audiences, and introduces the Brisbane chapter's amusing 'Tinfoil hat scale'.
Mar 20, 2016 Host: Jasiek Luszczki
This episode of TZM Global is hosted by Jasiek Luszczki from the Polish chapter of TZM. Today's show features an interview with two activists from the Rotterdam TZM chapter (Holland), Anthony Jacobi and Robert Schram.
They talk about how they apply an NLRBE-like philosophy and code of conduct within the confines of today's monetary system, and they present some ideas on how to move from a "business as usual" (working for profit) mindset to an "awareness as usual" (generating social capital) one.
Feb 10, 2016 Host: James Phillips
This episode of TZM Global is hosted by UK chapter member and TZM Education coordinator James Phillips. It features an interview with fellow TZM members Jasiek Thejester and Stefan Kengen, from the Polish and Danish chapters of TZM respectively, about the recent European meeting held in Rotterdam.
Dec 10, 2015 Host: James Phillips
This episode of TZM Global is hosted by UK chapter member James Phillips, co-coordinator of the movement's global educational activism project, TZM Education.
Along with other movement-related news, this episode includes a conversation with fellow TZM Education member and Hungarian chapter coordinator Sztella Kantor about her experience taking TZM Education materials into schools in Hungary.
If you are interested in taking part in this global initiative then please visit: www.tzmeducation.org
*At the time of publication there was an issue with our podcast provider, BlogTalk Radio, so the show could initially be uploaded only in its edited form to YouTube. The full version will be released as soon as this issue is resolved.
Nov 25, 2015 Host: James Phillips
Ep 178 European TZM meeting show - Rotterdam. This episode of TZM Global is hosted by UK chapter team member and co-coordinator of TZM Education (www.tzmeducation.org) James Phillips.
This episode includes an interview with the Global Chapters Administration Coordinator Gilbert Ismail regarding the upcoming European TZM Meetup in Rotterdam next month. For more information, please visit the following link: https://www.facebook.com/events/91743...
Also included in this show is a request for more content for TZM Global Radio. Please send pre-recorded submissions to: email@example.com.
Conventional wisdom would have you believe that most people enter adolescence with a head full of high-minded ideals and a willingness to shake up the system. As they get older, however, they gradually begin to accept the status quo. For me, that process is reversed.
The older I get, the more skeptical I become of our current social model. Why?
Let’s start with this:
It should be of increasing concern to all Americans that there is an extreme disconnect between what the public believes about man-made climate change and what science tells us about it. That is to say, despite a clear scientific consensus, man-made climate change is more often than not framed as an ambiguous concept in the U.S. mainstream media. Consequently, climate change is generally thought to be far more unsettled than it actually is.
INTRODUCTION AND DISCLAIMER 
The purpose of this project is to enable supporters of a natural law resource based economic model (NLRBE) to understand and appreciate the need to approach the education system in an effort to initiate the value shift required for a more peaceful and sustainable future to emerge.
Today I was reading The Zeitgeist Movement Defined: Realizing a New Train of Thought again. I did so because I have felt the need to express a certain frustration with this movement of mine but haven't found the right words. I also didn't want to make any false assumptions about its architecture, so I went straight to the source with a pen in my hand.
I went through the nine pages that constitute the overview and extracted some notes I would like to post here:
We need more films about social, ecological and economic change!
We want to make one, and you could help us.
In our documentary "The Taste of Life" we want to show that people all over the world are already putting this change into practice in a great way.
From social symptom to root causes came about as a by-product of ZDay 2013 in London, in which all but the introductory talk featured external organisations and speakers, each of whom seeks to address a particular social or environmental issue closely aligned with the movement’s materials.
Transcript below. It can also be viewed as a PDF.
Welcome to “3 Questions: What do you propose?” This thought exercise is intended both for the average person concerned about global problems and for those who are still confused about, or perhaps even opposed to, The Zeitgeist Movement.
Peter Joseph, ZDay 2016, "Where We Go From Here," March 26th, Athens, Greece [The Zeitgeist Movement]
This article is part of a new series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World” (https://singularityhub.com/2017/01/11/how-the-most-successful-leaders-will-thrive-in-an-exponential-world/), broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today's post, part four in the series, takes a more detailed look at leaders as innovators. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life” (https://singularityhub.com/2017/02/23/how-leaders-dream-boldly-to-bring-new-futures-to-life/), and part three, “How All Leaders Can Make the World a Better Place” (https://singularityhub.com/2017/03/20/how-all-leaders-can-make-the-world-a-better-place/), and stay tuned for an upcoming article exploring leaders as technologists.
Jeff Bezos is arguably one of today’s most innovative leaders. He is a great example of a leader who imagines possible new futures and has created an organization that puts as much discipline into innovating as it does into bringing those new ideas to life.
In the 20-plus years Amazon has been in business, Bezos has entered and disrupted multiple industries — retail and technology infrastructure, for example — pioneering new business models that make competition irrelevant.
How does Amazon do it?
In a recently released shareholder letter (https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm) Bezos outlined Amazon’s operating principles which he calls “Day 1.” To Bezos, Day 1 represents being a customer-obsessed company that focuses on experimentation, utilizing external trends, being skeptical of information, and making quick decisions. Bezos is obsessed with creating an innovative company and culture: “Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”
Being innovative like Amazon isn’t just valuable in tech — it’s a key skill for all leaders. We live in a fast-paced world, where even the biggest companies can be disrupted.
But making a company innovative from the roots up is no easy task. To lead like an innovator means investing in the right mindset, culture, incentives and support throughout your organization. It means looking beyond a single brilliant idea and instead creating an iterative system focused on value creation and scalable growth.
Innovation Starts With the Right Mindset
Successful innovators know that bringing something new to life — something of value that creates positive impact for customers, partners or communities — requires a mindset focused on discovery of the unknown, not on execution of the existing.
This may sound simple, but embracing a discovery mindset is almost antithetical to how we’ve been taught and trained. Nearly all our formal education focuses on performance, rewarding “right” answers and defendable arguments. Our organizations tend to promote individuals who ace their performance reviews, rewarding execution and risk management over divergent thinking and questioning.
And yet, the success that comes with executing what we know and what we’ve historically done is exactly what prevents us from seeing what’s next.
Microsoft’s CEO Satya Nadella believes instilling a culture of discovery is critical to the company’s future success. After taking the reins at Microsoft, he endorsed the importance of a “growth mindset” (http://www.businessinsider.com/satya-nadella-instilling-growth-mindset-at-microsoft-2015-6), emphasizing learning from others and from our own mistakes to move quickly and find the right path forward. Nadella says (https://www.wsj.com/articles/ceo-satya-nadella-seeks-to-change-microsofts-image-1477368916), “We want to push to be more of a learn-it-all culture than a know-it-all culture.”
At the heart of a discovery mindset is the willingness to ask different questions. In his research for his latest book, A More Beautiful Question, author Warren Berger (https://singularityhub.com/2017/02/09/why-the-best-innovators-ask-the-most-beautiful-questions/) studied hundreds of innovative companies and found the original idea often came from asking divergent questions that “are ambitious and yet actionable, capable of shifting the way we think about something and to serve as a catalyst for change.”
The startup Calorie Cloud (http://caloriecloud.org/) is a great example of a discovery mindset in practice.
Founders Troy Hickerson and Dan Byler got inspiration for the company after learning about a community church that collectively lost 200,000 pounds through group incentives. Half joking, they said that the community should have donated those calories to kids suffering from acute malnutrition. Then one founder asked, “Can we do that?”
To date, Calorie Cloud has helped exchange over 3.5 billion unwanted calories for much needed ones, inventing the first global market for calories.
So, mindset is important. But you don’t magically shift a whole organization’s mindset overnight. Thinking like an innovator has to become part of the culture.
Innovation Requires Investing in Culture
In his book The Geography of Genius (https://www.amazon.com/Geography-Genius-Creative-Ancient-Silicon/dp/1451691653), Eric Weiner explores why some cities were more creative and productive in certain moments of history than others.
After deep investigations of cities like ancient Athens during its Golden Age, Florence during the Renaissance, and present day Silicon Valley, Weiner finds their success comes back to a simple philosophical principle first coined by the great philosopher Plato: “What is honored in a country will be cultivated there.”
In other words, what each city valued is what received investment, attention, talent and rewards, and by honoring and valuing the right things, we can achieve greatness.
At Pixar, one of the world’s most creative and innovative companies, the culture honors creative and technical talent working together to tell great stories. They highly value peer support and honest feedback across all levels of the company.
Pixar co-founder Ed Catmull says the company is special because its workers have each other’s backs. They all want to do great work, but instead of going solo, they embrace an “all for one, one for all” mentality, productively criticizing each other’s ideas to make them better.
“Management’s job is not to prevent risk but to build the capability to recover when failures occur,” Catmull writes in his business classic Creativity, Inc. “It must be safe to tell the truth. We must constantly challenge all of our assumptions and search for the flaws that could destroy our culture.”
As Catmull notes, a culture of innovation starts at the top. Leaders have to embrace it first and then work tirelessly to protect, nurture, and reward innovative behavior.
Once innovation is baked into your culture, you can get down to work. But you still won’t get wildly creative and useful ideas without setting a few ground rules.
Innovation Is a Teachable and Learnable Discipline
Innovation is not driven by a single great idea or the result of magical serendipity; innovation is a process of disciplined exploration and experimentation.
There are many “playbooks” for innovation — lean, agile, and design-thinking, among others. Regardless of which you use, practicing innovation requires we learn seven essential skills that aren’t often taught in traditional education or training.
It All Starts With the Customer: Learn how to observe and see things objectively from a customer’s perspective. This allows us to identify what they really need, not what we believe our current capabilities and features do for them.
Don’t Fly Solo: Learn how to engage and collaborate with colleagues and partners who bring diverse experience and perspectives to the effort. This might be nurtured by creating a war room (http://designabetterbusiness.com/2016/12/02/welcome-to-the-war-room/), an unstructured space to allow new ideas to grow.
Tell Stories. Think and Work Visually: Practice presenting ideas in a compelling way — through story, metaphor and visualization — to help overcome our need for endless research data to ensure our ideas are the right ones.
Keep It Simple: Practice finding the simple idea hidden in the complexity. This is an ability to step back and see the big picture as well as break the big picture down to find the most critical thing that will lead to success now.
Set up Small Experiments: When we experiment, we learn whether the value we’ve identified, and the way we deliver it, is truly valued by our customers.
Embrace Uncertainty: When we expect change and cultivate a mindset of continuous learning and growth, we become an architect of hope for others.
Working at innovation gives us a chance to practice and learn these skills in rapid cycles. This has a flywheel effect — not only are ideas advanced in appropriate and measured ways, but also capability is catalyzed and scaled within the organization.
We Are All Capable of Being Innovators
Innovation takes more than lip service. It takes more than a brainstorming session where we generate a lot of ideas but few lead to meaningful change. Many of us have lots of ideas, but ideas alone do not necessarily result in successful innovation.
True innovation requires deep persistence and fierce resolve, a willingness to move forward when you’re not quite sure you will be successful, rapid adaptive cycles of learning, and the ability to connect and galvanize networks of diverse and committed talent to work together towards a larger goal.
In a recent interview, Apple CEO Tim Cook talked about the value of diversity in their organization. “Our best work comes from the diversity of ideas and people. We believe in a modern definition of diversity — the big D — which supports creative friction and its contribution to making better products.”
When you take the role of spokesperson or evangelist for innovation in your organization, you have the opportunity to bring about the improvement you want to see in the world and to inspire your colleagues to build the skills to do the same. In doing so, you can ignite and scale positive impact and architect a better future for all of us.
The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.
– Ibn al-Haytham (965-1040 CE)
Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year (http://www.nature.com/news/scientists-may-be-reaching-a-peak-in-reading-habits-1.14658). Meanwhile, the quality of the scientific literature has been in decline. Some recent studies (http://www.nature.com/nature/journal/v483/n7391/full/483531a.html) found that the majority of biomedical papers were irreproducible (https://aeon.co/essays/it-s-time-for-science-to-abandon-the-term-statistically-significant).
The twin challenges of too much quantity and too little quality are rooted in the finite neurological capacity of the human mind. Scientists are deriving hypotheses from a smaller and smaller fraction of our collective knowledge and consequently, more and more, asking the wrong questions, or asking ones that have already been answered. Also, human creativity seems to depend increasingly on the stochasticity of previous experiences – particular life events that allow a researcher to notice something others do not. Although chance has always been a factor in scientific discovery, it is currently playing a much larger role than it should.
One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated?
I believe it can, using an approach that we have known about for centuries. The answer to this question can be found in the work of Sir Francis Bacon, the 17th-century English philosopher and a key progenitor of modern science.
The first iterations of the scientific method can be traced back many centuries earlier to Muslim thinkers such as Ibn al-Haytham, who emphasised both empiricism and experimentation. However, it was Bacon who first formalised the scientific method and made it a subject of study. In his book Novum Organum (1620), he proposed a model for discovery that is still known as the Baconian method. He argued against syllogistic logic for scientific synthesis, which he considered to be unreliable. Instead, he proposed an approach in which relevant observations about a specific phenomenon are systematically collected, tabulated and objectively analysed using inductive logic to generate generalisable ideas. In his view, truth could be uncovered only when the mind is free from incomplete (and hence false) axioms.
The Baconian method attempted to remove logical bias from the process of observation and conceptualisation, by delineating the steps of scientific synthesis and optimising each one separately. Bacon’s vision was to leverage a community of observers to collect vast amounts of information about nature and tabulate it into a central record accessible to inductive analysis. In Novum Organum, he wrote: ‘Empiricists are like ants; they accumulate and use. Rationalists spin webs like spiders. The best method is that of the bee; it is somewhere in between, taking existing material and using it.’
The Baconian method is rarely used today. It proved too laborious and extravagantly expensive; its technological applications were unclear. However, at the time the formalisation of a scientific method marked a revolutionary advance. Before it, science was metaphysical, accessible only to a few learned men, mostly of noble birth. By rejecting the authority of the ancient Greeks and delineating the steps of discovery, Bacon created a blueprint that would allow anyone, regardless of background, to become a scientist.
Bacon’s insights also revealed an important hidden truth: the discovery process is inherently algorithmic. It is the outcome of a finite number of steps that are repeated until a meaningful result is uncovered. Bacon explicitly used the word ‘machine’ in describing his method. His scientific algorithm has three essential components: first, observations have to be collected and integrated into the total corpus of knowledge. Second, the new observations are used to generate new hypotheses. Third, the hypotheses are tested through carefully designed experiments.
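Read that way, Bacon’s method is literally a loop. Below is a minimal sketch of that loop in Python; every name is a hypothetical placeholder for one of the three components and a stopping criterion, not a real library or a procedure the text prescribes.

```python
# A minimal sketch of the Baconian discovery loop. The callables passed in
# (observe, hypothesize, experiment, is_meaningful) are hypothetical
# placeholders for Bacon's three components plus a stopping criterion.

def baconian_discovery(corpus, observe, hypothesize, experiment, is_meaningful):
    """Repeat observe -> hypothesize -> test until a result is meaningful."""
    while True:
        # 1. Collect observations and integrate them into the corpus.
        corpus = corpus | observe()
        # 2. Generate candidate hypotheses from the updated corpus.
        for hypothesis in hypothesize(corpus):
            # 3. Test each hypothesis with a designed experiment,
            #    feeding the result back into the corpus.
            result = experiment(hypothesis)
            corpus = corpus | {(hypothesis, result)}
            if is_meaningful(result):
                return hypothesis, result
```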
If science is algorithmic, then it must have the potential for automation. This futuristic dream has eluded information and computer scientists for decades, in large part because the three main steps of scientific discovery occupy different planes. Observation is sensual; hypothesis-generation is mental; and experimentation is mechanical. Automating the scientific process will require the effective incorporation of machines in each step, and in all three feeding into each other without friction. Nobody has yet figured out how to do that.
Experimentation has seen the most substantial recent progress. For example, the pharmaceutical industry commonly uses automated high-throughput platforms for drug design. Startups such as Transcriptic and Emerald Cloud Lab, both in California, are building systems to automate almost every physical task that biomedical scientists do. Scientists can submit their experiments online, where they are converted to code and fed into robotic platforms that carry out a battery of biological experiments. These solutions are most relevant to disciplines that require intensive experimentation, such as molecular biology and chemical engineering, but analogous methods can be applied in other data-intensive fields, and even extended to theoretical disciplines.
Automated hypothesis-generation is less advanced, but the work of Don Swanson in the 1980s provided an important step forward. He demonstrated the existence of hidden links between unrelated ideas in the scientific literature; using a simple deductive logical framework, he could connect papers from various fields with no citation overlap. In this way, Swanson was able to hypothesize a novel link between dietary fish oil and Raynaud’s syndrome (https://www.ncbi.nlm.nih.gov/pubmed/3797213) without conducting any experiments or being an expert in either field. Other, more recent approaches, such as those of Andrey Rzhetsky at the University of Chicago and Albert-László Barabási at Northeastern University, rely on mathematical modeling and graph theory. They incorporate large datasets, in which knowledge is projected as a network, where nodes are concepts and links are relationships between them. Novel hypotheses would show up as undiscovered links between nodes.
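To make the network picture concrete, here is a toy link-prediction sketch in Python. It is not the cited researchers’ actual model; the mini knowledge graph and the common-neighbor scoring are invented for illustration, loosely echoing Swanson’s fish-oil example.

```python
# Toy hypothesis generation as link prediction: concepts are nodes, known
# relationships are edges, and unlinked node pairs with shared neighbors
# become candidate hypotheses, scored by common-neighbor count.
from itertools import combinations

knowledge = {  # invented toy knowledge graph: concept -> related concepts
    "fish oil": {"blood viscosity", "vascular reactivity"},
    "Raynaud's syndrome": {"blood viscosity", "vascular reactivity"},
    "aspirin": {"blood viscosity"},
}

# Make the adjacency symmetric so neighbors can be looked up from either side.
graph = {}
for node, nbrs in knowledge.items():
    for nbr in nbrs:
        graph.setdefault(node, set()).add(nbr)
        graph.setdefault(nbr, set()).add(node)

candidates = []
for a, b in combinations(graph, 2):
    if b not in graph[a]:            # no known direct link yet
        shared = graph[a] & graph[b]  # concepts both connect to
        if shared:
            candidates.append((len(shared), a, b))

for score, a, b in sorted(candidates, reverse=True):
    print(f"candidate hypothesis: {a} <-> {b} (shared concepts: {score})")
```

Run on this toy graph, the top-scoring candidate is the fish oil / Raynaud's pair, recovered purely from shared intermediate concepts.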
The most challenging step in the automation process is how to collect reliable scientific observations on a large scale. There is currently no central data bank that holds humanity’s total scientific knowledge on an observational level. Natural language-processing has advanced to the point at which it can automatically extract not only relationships but also context from scientific papers (https://nlp.stanford.edu/pubs/gupta-manning-ijcnlp11.pdf). However, major scientific publishers have placed severe restrictions on text-mining. More important, the text of papers is biased towards the scientist’s interpretations (or misconceptions), and it contains synthesised complex concepts and methodologies that are difficult to extract and quantify.
Nevertheless, recent advances in computing and networked databases make the Baconian method practical for the first time in history. And even before scientific discovery can be automated, embracing Bacon’s approach could prove valuable at a time when pure reductionism is reaching the edge of its usefulness.
Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge. It would also provide a much-needed reminder of what science is supposed to be: truth-seeking, anti-authoritarian, and limitlessly free.
This article was originally published at Aeon (https://aeon.co) and has been republished under Creative Commons.
Banner Image Credit: Portrait of Sir Francis Bacon by John Vanderbank/Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Francis_Bacon,_Viscount_St_Alban_from_NPG_(2).jpg)
Google recently bared the inner workings of its dedicated machine learning chip, the TPU, marking the latest skirmish in the arms race for AI hardware supremacy.
Shorthand for Tensor Processing Unit, the chip has been tailored for use with Google’s open-source machine learning library TensorFlow, and has been in use in Google’s data centers since 2015. But earlier this month the company finally provided performance figures for the device (https://arxiv.org/abs/1704.04760).
The company says the current generation of TPUs is designed for inference — using an already trained neural network to carry out some kind of function, like recognizing voice commands through Google Now. On those tasks, the firm says the TPU is 15 to 30 times faster than contemporary GPUs and CPUs and, equally important, 30 to 80 times more power-efficient.
For context, CPUs, or central processing units, are the processors that have been at the heart of most computers since the 1960s. But they are not well-suited to the incredibly high computational requirements of modern machine learning approaches, in particular deep learning.
In the late 2000s, researchers discovered that graphics cards were better suited for the highly parallel nature of these tasks, and GPUs, or graphics processing units, became the de facto technology for implementing neural networks. But as Google’s use of machine learning continued to expand, they wanted something custom built for their needs.
“The need for TPUs really emerged about six years ago, when we started using computationally expensive deep learning models in more and more places throughout our products. The computational expense of using these models had us worried,” lead engineer Norm Jouppi writes in a blog post (https://cloudplatform.googleblog.com/2017/04/quantifying-the-performance-of-the-TPU-our-first-machine-learning-chip.html).
“If we considered a scenario where people use Google voice search for just three minutes a day and we ran deep neural nets for our speech recognition system on the processing units we were using, we would have had to double the number of Google data centers!”
Nvidia, for its part, says the comparison isn’t entirely fair. Google compared its TPU against a server-class Intel Haswell CPU and an Nvidia K80 GPU, but there have been two generations of Nvidia GPUs since then. Intel has kept quiet, but Haswell is also three generations old.
“While NVIDIA’s Kepler-generation GPU, architected in 2009, helped awaken the world to the possibility of using GPU-accelerated computing in deep learning, it was never specifically optimized for that task,” the company says in a blog post (https://blogs.nvidia.com/blog/2017/04/10/ai-drives-rise-accelerated-computing-datacenter/).
To make its point, Nvidia accompanied this with its own benchmarks, which pointed to its latest P40 GPU being twice as fast. But importantly, the TPU still blows it out of the water on power consumption, and it wouldn’t be surprising if Google were already readying, or even using, a new generation of TPUs that improves on this design.
That said, the TPU isn’t going to upend the chip market. Google won’t be selling it to competitors, and it is entirely focused on inference. Google still uses copious amounts of Nvidia’s GPUs for training, which explains the muted nature of the company’s rebuttal.
Google is also probably one of the few companies in the world with the money and the inclination to build a product from scratch in a completely new domain. But it is also one of the world's biggest processor purchasers, so the fact that it has decided the only way to meet its needs is to design its own is a warning sign for chip makers.
Indeed, that appears to be part of the idea. “Google’s release of this research paper is intended to raise the level of discussion amongst the machine learning community and the chip makers that it is time for an off-the-shelf merchant solution for running inference at scale,” writes Steve Patterson in NetworkWorld (http://www.networkworld.com/article/3190122/hardware/6-reasons-why-google-built-its-own-ai-chip.html).
This is probably not too far off, analyst Karl Freund writes in Forbes (https://www.forbes.com/sites/moorinsights/2017/04/13/googles-tpu-for-ai-is-really-fast-but-does-it-matter/amp/). “Given the rapid market growth and thirst for more performance, I think it is inevitable that silicon vendors will introduce chips designed exclusively for machine learning.”
Nvidia is unlikely to let its market-leading position slip, and later this year Intel will release the first chips powered by the machine learning-focused Nervana technology (https://www.technologyreview.com/s/602137/intel-buys-a-startup-to-catch-up-in-deep-learning/) it acquired last August. Even mobile players are getting in on the act.
Arm’s Dynamiq microarchitecture will allow customers to build AI accelerators directly into chips (http://www.theverge.com/2017/3/21/14998100/arm-new-dynamiq-microarchitecture-ai-chip-design) to bring native machine learning to devices like smartphones. And Qualcomm’s Project Zeroth has released a software development kit that can run deep learning programs (http://www.theverge.com/2016/5/2/11538122/qualcomm-deep-learning-sdk-zeroth) on devices like smartphones and drones featuring its Snapdragon processors.
Google’s release of the TPU may be just a gentle nudge to keep them heading in the right direction.
Image Credit: Shutterstock (http://www.shutterstock.com)
Facebook's Bizarre VR App Is Exactly Why Zuck Bought Oculus (https://www.wired.com/2017/04/facebook-spaces-vr-for-your-friends/)
Peter Rubin | WIRED
"When you launch Spaces from within your Oculus Rift headset, though, it logs into your Facebook account. The same one that you, along with nearly 2 billion other people on the planet, use on a regular basis. The same one you’ve already populated with all the information that Spaces now can serve back to you—like, for instance, a selection of your photos that you can use to create an avatar."
PRIVACY & SECURITY
These Surveillance Robots Will Work Together to Chase Down Suspects (https://www.recode.net/2017/4/18/15264908/surveillance-robots-network-cornell-suspects)
April Glaser | recode
"Imagine if the camera that saw the crime was a wheeled robot equipped with facial recognition technology that can share information instantly with other nearby robotic cameras—all programed to surveil a scene and pursue suspects to keep them in sight...One day, the software might be able to manage and coordinate hundreds of robotic cameras."
Snapchat's Amazing New Filters Drop Digital Stuff Into Your Real World (https://www.fastcodesign.com/90110603/snapchats-amazing-new-filters-drop-digital-stuff-into-your-real-world)
Mark Wilson | Fast Company
"All you do is drag, drop, and pinch-to-scale. Then, as you walk around with your phone, the object stays put like a digital sculpture. There’s no doubt, given Snap’s unprecedented track record in leveraging augmented reality into successful ad campaigns, we’ll see sponsored World Lenses soon."
A New "CRISPR Pill" Makes Bacteria Destroy Its Own DNA (https://futurism.com/this-crispr-pill-could-replace-antibiotics/)
Dom Galeon | Futurism
"Currently, this probiotic is still in its early stages, according to van Pijkeren, and is yet to be tested on animals. Luckily, similar studies have proven the effectiveness of bacteriophage-delivered CRISPR in killing bacteria. However, researchers still have some concerns...CRISPR is ideal for this use because such drugs would be very specific to the user."
U.S. Marines Testing Disposable Delivery Drones (http://spectrum.ieee.org/automaton/robotics/drones/marines-testing-disposable-gliding-delivery-drones)
Evan Ackerman | IEEE Spectrum
"Making the drone disposable keeps costs way, way down, and the hope is that the total cost for a production TACAD glider will end up somewhere between US $1,500 and $3,000. The TACAD will be able to 'deliver food, water, batteries, fuel and other supplies at the same price and precision as existing aerial delivery systems.'"
Image source: Shutterstock (https://www.shutterstock.com/image-photo/milan-italy-may-27-2016-close-428687383?src=QDwtEBH2whroi955mK9_Xg-1-17)
Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighborhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
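As a minimal sketch of the symmetric idea, here is a toy one-time pad in Python; it illustrates shared-key encryption in general, not the scheme any bank actually uses.

```python
# A toy symmetric-key cipher: Alice and Bob share a secret key and XOR it
# with the message. With a random key used only once, this is a one-time pad.
import secrets

key = secrets.token_bytes(16)   # agreed on in person beforehand
message = b"meet at noon, -A"   # 16 bytes, same length as the key

ciphertext = bytes(m ^ k for m, k in zip(message, key))  # Alice encrypts
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))  # Bob decrypts
assert recovered == message

print(ciphertext.hex(), "->", recovered.decode())
```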
The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
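That easy-to-do, hard-to-undo asymmetry can be demonstrated with toy numbers. The sketch below uses naive trial division as Eve's attack; real public-key systems rest on related but more elaborate mathematics, with numbers far too large for any such approach.

```python
# Multiplying two primes is one instruction; recovering them from the product
# by trial division already takes visible time even at this toy scale.
import time

p, q = 1_000_003, 1_000_033  # two primes Alice might pick (toy-sized)
n = p * q                    # the easy direction: one multiplication

def naive_factor(n):
    """Eve's hard direction: trial division up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

start = time.perf_counter()
factors = naive_factor(n)
elapsed = time.perf_counter() - start
print(f"n = {n}, recovered factors = {factors} in {elapsed:.3f}s")
# Real systems use primes hundreds of digits long, far beyond trial division.
```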
Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) — the British equivalent (more or less) of the US National Security Agency (NSA) — who wanted to protect communications between a large number of people in a security organization. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.
Now cryptographers think that a new kind of computer based on quantum physics could make public-key cryptography insecure. Bits in a normal computer are either 0 or 1. Quantum physics allows bits to be in a superposition of 0 and 1, in the same way that Schrödinger’s cat can be in a superposition of alive and dead states. This sometimes lets quantum computers explore possibilities more quickly than normal computers. While no one has yet built a quantum computer capable of solving problems of nontrivial size (unless they kept it secret), over the past 20 years, researchers have started figuring out how to write programs for such computers and predict that, once built, quantum computers will quickly solve ‘hidden subgroup problems’. Since all public-key systems currently rely on variations of these problems, they could, in theory, be broken by a quantum computer.
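One rough way to see the superposition claim is to simulate a small register classically: the state of n qubits is a vector of 2^n amplitudes, so a single operation acts on all 2^n basis states at once. A sketch in Python/NumPy, simulating rather than using a quantum computer:

```python
# An n-qubit register is described by 2**n complex amplitudes, while n
# classical bits hold just one of those states at a time.
import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in the basis state |000>

# A Hadamard gate on every qubit puts the register in an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
op = H
for _ in range(n - 1):
    op = np.kron(op, H)  # tensor product builds the 8x8 three-qubit operator
state = op @ state

print(state)  # eight equal amplitudes: all 2**3 basis states at once
```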
Cryptographers aren’t just giving up, however. They’re exploring replacements for the current systems, in two principal ways. One deploys quantum-resistant ciphers, which are ways to encrypt messages using current computers but without involving hidden subgroup problems. Thus they seem to be safe against code-breakers using quantum computers. The other idea is to make truly quantum ciphers. These would ‘fight quantum with quantum’, using the same quantum physics that could allow us to build quantum computers to protect against quantum-computational attacks. Progress is being made in both areas, but both require more research, which is currently being done at universities and other institutions around the world.
Yet some government agencies still want to restrict or control research into cryptographic security. They argue that if everyone in the world has strong cryptography, then terrorists, kidnappers and child pornographers will be able to make plans that law enforcement and national security personnel can’t penetrate.
But that’s not really true. What is true is that pretty much anyone can get hold of software that, when used properly, is secure against any publicly known attacks. The key here is ‘when used properly’. In reality, hardly any system is always used properly. And when terrorists or criminals use a system incorrectly even once, that can allow an experienced codebreaker working for the government to read all the messages sent with that system. Law enforcement and national security personnel can put those messages together with information gathered in other ways — surveillance, confidential informants, analysis of metadata and transmission characteristics, etc — and still have a potent tool against wrongdoers.
In his essay ‘A Few Words on Secret Writing’ (1841) (http://users.telenet.be/d.rijmenants/secret_writing.pdf), Edgar Allan Poe wrote: ‘[I]t may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.’ In theory, he has been proven wrong: when executed properly under the proper conditions, techniques such as quantum cryptography are secure against any possible attack by Eve. In real-life situations, however, Poe was undoubtedly right. Every time an ‘unbreakable’ system has been put into actual use, some sort of unexpected mischance eventually has given Eve an opportunity to break it. Conversely, whenever it has seemed that Eve has irretrievably gained the upper hand, Alice and Bob have found a clever way to get back in the game. I am convinced of one thing: if society does not give ‘human ingenuity’ as much room to flourish as we can manage, we will all be poorer for it.
This article was originally published at Aeon (https://aeon.co) and has been republished under Creative Commons.
Banner Image Credit: Brewbooks/US Navy/Flickr (https://www.flickr.com/photos/brewbooks/3318600273)
Like islands jutting out of a smooth ocean surface, dreams puncture our sleep with disjointed episodes of consciousness. How states of awareness emerge from a sleeping brain has long baffled scientists and philosophers alike.
For decades, scientists have associated dreaming with rapid eye movement (REM) sleep, a sleep stage in which the resting brain paradoxically generates high-frequency brain waves that closely resemble those of the waking brain.
Yet dreaming isn’t exclusive to REM sleep. A series of oddball reports also found signs of dreaming during non-REM deep sleep, when the brain is dominated by slow-wave activity—the opposite of an alert, active, conscious brain.
Now, thanks to a new study (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html) published in Nature Neuroscience (http://www.nature.com/neuro/index.html), we may have an answer to the tricky dilemma.
By closely monitoring the brain waves of sleeping volunteers, a team of scientists at the University of Wisconsin (http://www.wisc.edu/) pinpointed a local “hot spot” in the brain that fires up when we dream, regardless of whether a person is in non-REM or REM sleep.
“You can really identify a signature of the dreaming brain,” says study author Dr. Francesca Siclari (https://www.theguardian.com/science/2017/apr/10/scientists-identify-parts-of-brain-involved-in-dreaming).
What’s more, using an algorithm developed based on their observations, the team could accurately predict whether a person is dreaming with nearly 90 percent accuracy, and—here’s the crazy part—roughly parse out the content of those dreams.
“[What we find is that] maybe the dreaming brain and the waking brain are much more similar than one imagined,” says Siclari (https://www.theguardian.com/science/2017/apr/10/scientists-identify-parts-of-brain-involved-in-dreaming).
The study not only opens the door to modulating dreams for PTSD therapy, but may also help researchers better tackle the perpetual mystery of consciousness.
“The importance beyond the article is really quite astounding,” says Dr. Mark Blagrove (https://www.theguardian.com/science/2017/apr/10/scientists-identify-parts-of-brain-involved-in-dreaming) at Swansea University (http://www.swansea.ac.uk/) in Wales, who was not involved in the study.
The anatomy of sleep
During a full night’s sleep we cycle through different sleep stages characterized by distinctive brain activity patterns. Scientists often use electroencephalography (EEG) to precisely capture each sleep stage, which involves placing 256 electrodes against a person’s scalp to monitor the number and size of brainwaves at different frequencies.
When we doze off for the night, our brains generate low-frequency activity that sweeps across the entire surface. These waves signal that the neurons are in their “down state” and unable to communicate between brain regions—that’s why low-frequency activity is often linked to the loss of consciousness.
These slow oscillations of non-REM sleep eventually transform into high-frequency activity, signaling the entry into REM sleep. This is the sleep stage traditionally associated with vivid dreaming—the connection is so deeply etched into sleep research that reports of dreamless REM sleep or dreams during non-REM sleep were largely ignored as oddities.
These strange cases tell us that our current understanding of the neurobiology of sleep is incomplete, and that’s what we tackled in this study, explain the authors (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html).
To reconcile these paradoxical results, Siclari and team monitored the brain activity of 32 volunteers with EEG and woke them up during the night at random intervals. The team then asked the sleepy participants whether they were dreaming, and if so, what were the contents of the dream. In all, this happened over 200 times throughout the night.
Rather than seeing a global shift in activity that correlates to dreaming, the team surprisingly uncovered a brain region at the back of the head—the posterior “hot zone”—that dynamically shifted its activity based on the occurrence of dreams.
Dreams were associated with a decrease in low-frequency waves in the hot zone, along with an increase in high-frequency waves that reflect high rates of neuronal firing and brain activity—a sort of local awakening, irrespective of the sleep stage or overall brain activity.
“It only seems to need a very circumscribed, a very restricted activation of the brain to generate conscious experiences,” says Siclari (https://www.theguardian.com/science/2017/apr/10/scientists-identify-parts-of-brain-involved-in-dreaming). “Until now we thought that large regions of the brain needed to be active to generate conscious experiences.”
That the hot zone leaped to action during dreams makes sense, explain the authors (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html). Previous work showed stimulating these brain regions with an electrode can induce feelings of being “in a parallel world.” The hot zone also contains areas that integrate sensory information to build a virtual model of the world around us. This type of simulation lays the groundwork of our many dream worlds, and the hot zone seems to be extremely suited for the job, say the authors.
If an active hot zone is, in fact, a “dreaming signature,” its activity should be able to predict whether a person is dreaming at any time. The authors crafted an algorithm based on their findings and tested its accuracy on a separate group of people.
“We woke them up whenever the algorithm alerted us that they were dreaming, a total of 84 times,” the researchers say (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html).
Overall, the algorithm rocked its predictions with roughly 90 percent accuracy (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html)—it even nailed cases where the participants couldn’t remember the content of their dreams but knew that they were dreaming.
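For intuition only, here is a toy Python version of the kind of signature described above. It is emphatically not the study’s algorithm; the frequency bands, threshold, and synthetic signals are all invented for illustration.

```python
# Dreaming was reported alongside less low-frequency and more high-frequency
# power in the posterior "hot zone," so a crude detector can threshold the
# ratio of high- to low-frequency EEG power. All numbers here are made up.
import numpy as np

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)   # four seconds of signal

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def looks_like_dreaming(eeg, fs, threshold=1.0):
    slow = band_power(eeg, fs, 1, 4)    # slow-wave "down state" band
    fast = band_power(eeg, fs, 20, 50)  # high-frequency activity
    return fast / slow > threshold

# Synthetic posterior-channel signals: slow-wave-dominated vs. "activated".
dreamless = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(len(t))
dreaming = 0.2 * np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 30 * t)

print(looks_like_dreaming(dreamless, fs))  # expect False
print(looks_like_dreaming(dreaming, fs))   # expect True
```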
Since the hot zone contains areas that process visual information, the researchers wondered if they could get a glimpse into the content of the participants’ dreams simply by reading EEG recordings.
Dreams can be purely perceptual with unfolding narratives, or they can be more abstract and “thought-like,” the team explains (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html). Faces, places, movement and speech are all common components of dreams and processed by easily identifiable regions in the hot zone, so the team decided to focus on those aspects.
Remarkably, volunteers who reported talking in their dreams showed activity in their language-related regions; those who dreamed of people showed activity in their facial recognition centers.
"This suggests that dreams recruit the same brain regions as experiences in wakefulness for specific contents," http://www.med.wisc.edu/news-events/activity-in-the-brains-hot-zone-predicts-dreams-during-sleep/50715 ">says Siclari, adding that previous studies were only able to show this in the “twilight zone,” the transition between sleep and wakefulness.
Finally, the team asked what happens when we know we were dreaming, but can’t remember the specific details. As it happens, this frustrating state has its own EEG signature: remembering the details of a dream was associated with a spike in high-frequency activity in the frontal regions of the brain.
This raises some interesting questions, such as whether the frontal lobes are important for lucid dreaming, a meta-state in which people recognize that they’re dreaming and can alter the contents of the dream, says the team (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html).
The team can’t yet explain what is activating the hot zone during dreams, but the answers may reveal whether dreaming has a biological purpose, such as processing memories into larger concepts of the world (https://www.scientificamerican.com/article/what-is-dreaming-and-what-does-it-tell-us-about-memory-excerpt/).
Mapping out activity patterns in the dreaming brain could also lead to ways to directly manipulate our dreams using non-invasive procedures such as transcranial direct-current stimulation (https://en.wikipedia.org/wiki/Transcranial_direct-current_stimulation). Inducing a dreamless state could help people with insomnia, and disrupting a fearful dream by suppressing dreaming may potentially allow patients with PTSD a good night’s sleep.
Dr. Giulio Tononi, the lead author of this study, believes that the study’s implications go far beyond sleep.
"[W]e were able to compare what changes in the brain when we are conscious, that is, when we are dreaming, compared to when we are unconscious, during the same behavioral state of sleep," he http://www.med.wisc.edu/news-events/activity-in-the-brains-hot-zone-predicts-dreams-during-sleep/50715 ">says.
During sleep, people are cut off from the environment. Therefore, researchers could home in on brain regions that truly support consciousness while avoiding confounding factors that reflect other changes brought about by coma, anesthesia or environmental stimuli.
“This study suggests that dreaming may constitute a valuable model for the study of consciousness,” says Tononi (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4545.html).
Image Credit: Shutterstock (http://www.shutterstock.com)
Attempts to distill the essence of “selfhood” have occupied philosophers for centuries. Consensus has been fleeting at best, but is likely to get even harder as genetic tools allow us to tweak our bodies and potentially our minds.
DNA-based technology’s entry into the mainstream has been picking up lately. Just last week, the FDA approved a genetic testing kit (https://www.wired.com/2017/04/23andme-won-back-right-foretell-diseases/) from Californian company 23andMe that lets customers find out how their genes could contribute to their chances of developing 10 diseases or passing them on to their children.
For the time being, this is where this technology is primarily directed—forewarning those whose DNA conspires against them. But rapid advances mean it is becoming increasingly feasible to go further and start editing out this defective code, either using gene therapies or editing genes in the embryo.
As the authors of an essay in Science (http://science.sciencemag.org/content/356/6334/139) last week noted, the imperative to help those afflicted by genetic disease could be causing us to ignore the significance of what it means to tinker with our genetic makeup.
“The urgency to rebuild ourselves following disease and injury impels many patients to want therapies now, without a concern for how the technologies being used on our cells or bodies may affect human identity,” they write.
What constitutes human identity or personhood is an ongoing matter of debate. A major fault line is on the question of whether there is any separation between our physical bodies and our minds and whether mental phenomena are more than just electrical activity in the brain.
"Genetic technologies are likely to force us to revisit historical arguments for what constitutes personhood."
Neuroscience research has already had a profound impact on this discussion, casting doubt on the concept of free will (https://blogs.scientificamerican.com/mind-guest-blog/what-neuroscience-says-about-free-will/) and identifying patterns of neural activity that correlate with complex human mental states like emotions (https://www.forbes.com/sites/jenniferhicks/2013/06/25/using-brain-signals-to-read-emotions/#46bd29895d9c). Similarly, genetic technologies are likely to force us to revisit historical arguments for what constitutes personhood.
“If the self is partly but not wholly the physical body, then does it matter if we edit a gene, replace cells, or change an organ?” ask the authors of the essay. “Because our body is part of, or contributes appreciably to, our identity and how we see ourselves, alterations in its structure or function may affect that identity.”
Using genetic technology to cure a chronic genetic disease will clearly change that person’s identity, from a lifelong patient to a healthy and productive member of society. Few would argue that this change in identity is a bad thing, but the waters become murkier if we begin to tie genes to more abstract ideas like violence, depression or gambling.
As undesirable as these traits may be, they are intrinsic to our identities and removing them would fundamentally change who we are. And while still a long way off, it seems almost inevitable that these kinds of techniques will eventually be used, not just for prevention, but also to enhance our physical and cognitive abilities, potentially tailoring our identities or those of our children.
An increasing focus on the genetic component of our identities could result in a form of socio-genetic engineering, ethicists Eleonore Pauwels and Jim Dratwa argue in Scientific American (https://blogs.scientificamerican.com/guest-blog/how-identity-evolves-in-the-age-of-genetic-imperialism/). As the ability to tweak our DNA improves, the impulse to use it to match our identities to prescribed ideals and norms could become hard to avoid, with the ultimate result of reducing the diversity of human identities we have today.
We are more than the sum of the genes we inherited, though. We are also shaped by our environment, not only in terms of our experiences but also in more concrete ways. The field of epigenetics has demonstrated that our DNA is not a monolithic set of instructions meant to be read from start to finish. Instead, factors like environmental triggers and age can alter which of our genes are switched on or off.
“The more we learn about the complexity of the gene regulatory networks, the riskier it will be to predict how any manipulation may ultimately affect all aspects of our phenotype,” say the authors of the Science essay.
And it’s not as though genetic editing is the only way human identities can be shifted. A traumatic brain injury can often cause dramatic changes in behavior, point out the authors, and learning from experience can actually result in rewiring of the brain, altering that person’s identity. Despite that, it would be unwise to discount the ability of genetic technology to profoundly affect who we are.
"Will humanity split into sub-species of enhanced and non-enhanced humans? Will enhanced humans still be humans?"
More broadly, related questions about what it means to be a human being are also likely to come to the fore. The cost of this kind of treatment will mean its application will almost certainly be uneven. Will humanity split into sub-species of enhanced and non-enhanced humans? Will enhanced humans still be humans?
Researchers are already experimenting with human embryos that are part human, part animal. These so-called chimeras are aimed at creating better animal models for medical research and potentially allowing human organs destined for transplant to grow in other species. But what is the status of these embryos? Do they deserve the same protections human embryos receive?
Much further down the line, it’s conceivable that people will decide certain animal traits are desirable—such as luminescence or regenerative capabilities—and could use genetic technologies to incorporate them into humans. At the same time, human traits could be transferred into animals, blurring the imaginary line separating us from other species.
“Ultimately, human beings may be forced to answer the question of whether being human is what is important, or rather whether continuing to be a person possessed of certain morally valuable traits,” write the authors. “Perhaps in a century, the term ‘human being’ will be an object of nostalgia rather than a single moral category.”
Image Credit: Shutterstock (http://www.shutterstock.com)
Luke Skywalker wasn’t just a farmer. In the original 1977 Star Wars film, the lead character was desperate to leave his home planet of Tatooine, where his family farmed moisture from the atmosphere using devices called “vaporators.” In the planet’s hot and dry desert landscape, moisture farming was an important activity for survival.
But could this principle of drawing moisture from the air to provide drinking water work in the real world? Researchers like me are working on technology to turn it from science fiction into reality. And now a new study (http://science.sciencemag.org/content/early/2017/04/12/science.aam8743) has demonstrated how one device could work even in dry desert conditions, using only the power of the sun.
If you sit in your garden on a hot, humid summer day with an iced glass of water, you will notice water droplets forming on the outside of the glass. The Star Wars vaporators on Tatooine may have worked on a similar principle: cooling warm air produces condensation, which can then be collected. Rain is a natural result of the same process. When warm, humid air cools, it loses its capacity to hold its water content, and the excess falls as raindrops.
Air naturally carries water vapor, and the warmer the air and the higher the relative humidity, the more water vapor it can carry. So technology that generates water from air is most suited to warm and humid climates. At 100% humidity, the air at 40℃ contains about 51 milliliters of water per cubic meter of air (http://www.engineeringtoolbox.com/maximum-moisture-content-air-d_1403.html). For the same humidity at 10℃, the air contains only 9.3 milliliters.
If we cool that air from 40℃ to 10℃, we should be able to extract that water difference, which is 41.7 milliliters for each cubic meter of air. Under these conditions with current technology, we could produce 147 liters of water per hour using about the same energy as 18 domestic electric kettles.
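To make that arithmetic concrete, here is a minimal Python sketch. The moisture figures and the 147-liter target come from the text above; the derived airflow number is an illustration, not a figure from the article.

```python
# Water recovered by cooling saturated air from 40 C to 10 C, using the
# moisture figures quoted above (engineeringtoolbox.com).
MOISTURE_40C_ML_PER_M3 = 51.0  # saturated air at 40 C
MOISTURE_10C_ML_PER_M3 = 9.3   # saturated air at 10 C

water_per_m3_ml = MOISTURE_40C_ML_PER_M3 - MOISTURE_10C_ML_PER_M3  # 41.7 ml

# How much air must a system process each hour to hit the article's
# 147 liters-per-hour figure? (A derived estimate, not from the text.)
target_ml_per_hour = 147_000
airflow_m3_per_hour = target_ml_per_hour / water_per_m3_ml

print(f"Water per cubic meter of air: {water_per_m3_ml:.1f} ml")
print(f"Airflow needed for 147 L/h: {airflow_m3_per_hour:,.0f} cubic meters/h")
```

That works out to roughly 3,500 cubic meters of air per hour, which hints at why the energy cost climbs so quickly.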
At lower humidity, such as in a desert, there is less water in the air and so the system will be less efficient. You have to cool more air to extract the same quantity of water and that requires more energy. This can make the current technology too expensive for countries where water shortages are most severe. What you need is a more efficient way of capturing water vapor.
The simplest way of drawing water from air is with passive technology that provides a cool surface for fog or water vapor to condense onto. The selection of material and surface quality are critical for maximizing water collection. For example, farmers in Chile use a steel mesh to catch water from fog. Researchers have shown this can be made more efficient by adding a special coating that attracts water molecules (https://www.newscientist.com/article/mg22229754.400-fog-catchers-pull-water-from-air-in-chiles-dry-fields/).
Then there are active cooling technologies, such as a refrigeration cycle similar to the one we use in air conditioning systems and refrigerators. You can also use solid-state thermoelectric cooling, which involves something called the Peltier effect (https://van.physics.illinois.edu/qa/listing.php?id=19853).
In 1834, the French physicist Jean-Charles-Athanase Peltier discovered an interesting phenomenon. If you run a current through a circuit made from copper and bismuth metal wires, a temperature rise occurs at the point where the current passes from copper to bismuth and a temperature drop occurs where the current passes from bismuth to copper. This means by consuming electrical energy, we can provide a cooling effect without any fluids or moving parts.
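As a rough illustration of the physics, the ideal heat pumped at a thermoelectric junction is Q = Π × I, where the Peltier coefficient Π equals the Seebeck coefficient times the absolute temperature. The sketch below uses assumed, textbook-typical values, not figures from any study mentioned here.

```python
# Ideal Peltier cooling at a single thermoelectric junction.
# Q = Pi * I, with Pi = S * T (Thomson relation). All values below are
# assumed, textbook-typical numbers for illustration only.
SEEBECK_V_PER_K = 200e-6  # ~200 microvolts/K, typical of bismuth telluride
COLD_JUNCTION_K = 283.0   # a 10 C cold side, expressed in kelvin
CURRENT_A = 5.0           # drive current

peltier_coeff_v = SEEBECK_V_PER_K * COLD_JUNCTION_K  # Peltier coefficient (volts)
heat_pumped_w = peltier_coeff_v * CURRENT_A          # ideal heat absorbed (watts)

print(f"Ideal heat pumped per junction: {heat_pumped_w:.2f} W")
# Practical modules stack hundreds of such couples, and real performance is
# reduced by Joule heating (I^2 * R) and heat leaking back across the module.
```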
But scientists from the Massachusetts Institute of Technology (MIT) have now demonstrated another technology that could be even more efficient, using something called metal-organic frameworks powered by natural sunlight. The technology, described in the journal Science (http://science.sciencemag.org/content/early/2017/04/12/science.aam8743), uses a network of metal and organic molecules that can easily trap water vapor, which is then released using heat captured from the sun.
It has been reported that one kilogram of this material can harvest 2.8 liters of water a day at relative humidity levels as low as 20% without any other external power source. This makes it a particularly promising technology for harvesting water in arid or desert regions of the world.
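For a sense of scale, here is a back-of-the-envelope sizing based on that reported 2.8-liter figure; the per-person drinking-water requirement is an assumption for illustration, not a number from the study.

```python
# Rough sizing of a MOF-based harvester from the reported daily yield.
YIELD_L_PER_KG_PER_DAY = 2.8     # reported yield at ~20% relative humidity
DRINKING_L_PER_PERSON_DAY = 3.0  # assumed basic drinking-water need

def mof_mass_needed_kg(people: int) -> float:
    """Kilograms of MOF needed to cover basic drinking water for a household."""
    return people * DRINKING_L_PER_PERSON_DAY / YIELD_L_PER_KG_PER_DAY

print(f"A family of four would need about {mof_mass_needed_kg(4):.1f} kg of material")
```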
Another alternative is to use simpler cooling technology but reduce its cost. My team and I have been developing a water-from-air system using old fridges and freezers, along with other recycled components such as an old computer fan and a mobile phone charger. We hope to create a low-cost system for developing countries that also reduces waste in developed countries, particularly when solar panels are used to power the system (https://www.theguardian.com/sustainable-business/how-to-make-water-from-air-old-fridges).
Future work in this area includes using special surface coatings to create a non-stick surface, like that of a waterlily leaf, so that water droplets can be collected more easily, alongside ongoing research into metal-organic frameworks (http://science.sciencemag.org/content/early/2017/04/12/science.aam8743). Another challenge is air pollution: in some parts of the world, special filtering and treatment might be needed to make the captured water safe to drink.
But this technology is moving fast. Who knows, in the future, we might not have to travel to Tatooine to see a Star Wars vaporator in action.
This article was originally published on The Conversation (http://theconversation.com). Read the original article (https://theconversation.com/new-technology-brings-star-wars-style-desert-moisture-farming-a-step-closer-76183).
Banner Image Credit: Amin Al Habaibeh (https://theconversation.com/new-technology-brings-star-wars-style-desert-moisture-farming-a-step-closer-76183)
An organic diet has never been more in style than it is right now, with millions of consumers willing to shell out extra dollars for organic foods. Most of us have a vague idea that organic is better because it’s more natural and free of genetically modified organisms (GMOs) and pesticides.
But what does “natural” even mean? The line is harder to draw than we may think.
Earth’s population has more than doubled since 1960 (https://ourworldindata.org/world-population-growth/), and the UN estimates it will reach 9.7 billion by 2050 (http://www.un.org/en/development/desa/news/population/2015-report.html). GMOs already play a role in feeding those extra mouths, and if we let it, that role may grow. Yet they remain a source of controversy, surrounded by both valid concerns and misconceptions.
How different is food from GM crops compared to food from non-GM crops?
Humans have been “genetically modifying” plants and animals for thousands of years. Say a farmer five hundred years ago noticed that some of his corn was a little sweeter. To replicate that flavor, he might select those seeds for the next crop. The new trait arose through random genetic mutation, and establishing a noticeably sweeter variety through selective breeding would take years, if not decades.
Genetic engineering does much the same thing—discovering and introducing genes that yield desired traits—but in a faster and more accurate way than selective breeding.
Some GM foods, like Bt crops (http://sitn.hms.harvard.edu/flash/2015/insecticidal-plants/), are engineered to produce a form of pesticide themselves, which means they don’t need to be sprayed with chemical pesticides. Eating food that produces a pesticide sounds scary, but as the video notes, a substance that acts as a pesticide isn’t necessarily inedible or harmful to humans. Many substances harm insects or other animals but not people; coffee is one example.
And there are examples of pest-resistant GMOs having a tangible positive impact on people. When eggplant farmers in Bangladesh began to get sick from using too much chemical pesticide, for example, they adopted Bt eggplant and were able to reduce pesticide use by 80 percent.
Much of the backlash against GMOs is less about genetic engineering and more about the business practices of the corporations that control our food supply. GMO crops have been a money-maker for herbicide companies: as crops have been modified to be herbicide-resistant, herbicide use has increased. For companies making GMO seeds and the herbicides that go with them, that’s a lot of power over something as critical as how we feed ourselves.
And perhaps we need to be particularly careful with anything genetically modified, vetting it thoroughly for harm to humans and ecosystems. Once the genie’s out of the bottle, many worry we might not be able to get it back in again.
As we continue to confront and sort out the ethics of it all, however, we can’t neglect the potential good that genetic engineering may bring. We might even look beyond pests and weeds in the future. Plants could be engineered to produce more nutrients to improve our diet or to be more resilient to climate change, or even to protect the environment instead of just reducing agriculture’s impact on it.
GMOs are part of the larger genetic engineering debate, which is only going to intensify. New techniques are getting easier, cheaper, and more precise by the year. Tech can do damage or be a force for good; the real trick is weighing risk and benefit impartially and making choices that steer us in the right direction.
Image Credit: Kurzgesagt/YouTube (https://www.youtube.com/watch?v=7TmcXYp8xu4)
Zunum's Hybrid Jet Could Finally Make Electric Flight a Reality (https://www.wired.com/2017/04/hybrid-jet-finally-make-electric-flight-reality/)
Eric Adams | WIRED
"As crazy as it sounds, the aviation industry finds itself fascinated by electric airplanes, which require less fuel and make less noise than conventional aircraft. So far, though, the technology remains hobbled by the limitations of battery technology."
PRIVACY & SECURITY
The Steady Rise of Digital Border Searches (https://www.theatlantic.com/technology/archive/2017/04/the-steady-rise-of-digital-border-searches/522723/)
Kaveh Waddell | The Atlantic
"In the last six months, nearly 15,000 travelers had one of their devices searched at the border. Compare that to just 8,503 between October 2014 and October 2015, or 19,033 the following year...The agency says the steady increase in searches reflects 'current threat information,' but a spokesperson wouldn’t elaborate on the specific reasons for the trend."
This Google AI Turns Your Bad Doodles Into Polished Drawings (https://www.fastcodesign.com/90109672/this-google-ai-turns-your-bad-doodles-into-delightful-clipart)
Katherine Schwab | Fast Company
"This week, Google released a new AI experiment called AutoDraw, which turns your half-baked scribbles into poster-ready clipart. The tool uses machine learning to guess what you’re trying to draw and then gives you the option to replace your bad drawing with more polished ones."
Electrodes for Your Face Bring Your Emotions to Augmented and Virtual Reality (https://www.technologyreview.com/s/604127/electrodes-for-your-face-bring-your-emotions-to-augmented-and-virtual-reality/)
Rachel Metz | MIT Technology Review
"Called Mask, Tadi sees it as a way to bring natural-looking grimaces, smiles, and eyebrow raises to virtual characters without adding much bulk to headsets. Making it easier for users to express emotions—and interact with each other—in virtual reality could encourage more people to try it out, he thinks, and make it more effective."
Tarzan the Swinging Robot Could Be the Future of Farming (https://www.engadget.com/2017/04/12/tarzan-the-swinging-robot/)
Mariella Moon | Engadget
"Tarzan will be able to swing over crops using its 3D-printed claws and parallel guy-wires stretched over fields. It will then take measurements and pictures of each plant with its built-in camera while suspended...While it may take some time to achieve that goal, the researchers plan to start testing the robot soon."
Image source: Shutterstock (https://www.shutterstock.com)