Energy harvesting technologies for the Internet of Things (IoT) and the Military Internet of Things (MIoT)

The Internet of Things is an emerging revolution in the ICT sector, a shift from an “Internet used for interconnecting end-user devices” to an “Internet used for interconnecting physical objects that communicate with each other and/or with humans in order to offer a given service”. Several reports, including from Cisco, predict that by the year 2020 there could be in excess of 20 billion connected IoT devices: networks of everyday objects and sensors infused with intelligence and computing capability.

The military is also adopting IoT technologies. Analogous to IoT, the Military Internet of Things (MIoT) has been defined as comprising a multitude of platforms, ranging from ships and aircraft to ground vehicles and weapon systems.

Being based on Wireless Sensor Networks (WSNs), IoT applications demand smart, integrated, miniaturised and low-energy wireless nodes, typically powered by non-renewable energy storage units (batteries). This poses constraints, as batteries have a limited lifetime and their replacement is often impracticable. The battlefield environment, and hence the Military Internet of Things, is similarly constrained by power consumption.

Military IoT devices are likely to be powered by batteries or solar power, and charged on the move from solar panels, trucks, or even by motion while walking. In any case, they should last for extended periods of time (at least for the duration of the mission). Therefore, devices and sensors need to be power-efficient.

Researchers are considering multiple ways to solve the energy challenge of the Internet of Things and the Military IoT. One solution is to develop low-power, energy-efficient electronics. Fraunhofer’s researchers, and many others including DARPA, are focusing on the development of ‘wake-up receivers’. These devices use ultra-low currents to monitor wireless sensor networks and only fire up components from a sleeping state when they are required to handle an incoming request or instruction.
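
The power saving from this kind of duty cycling can be sketched with simple arithmetic. The sleep power, radio power, duty cycle, and battery capacity below are illustrative assumptions, not Fraunhofer or DARPA figures:

```python
# Sketch: why wake-up receivers matter for battery life.
# All figures below are illustrative assumptions, not measured values.

def average_power_uw(p_sleep_uw, p_active_uw, duty_cycle):
    """Average draw of a node that sleeps most of the time."""
    return p_sleep_uw * (1 - duty_cycle) + p_active_uw * duty_cycle

def battery_life_years(capacity_mah, voltage_v, avg_power_uw):
    """Lifetime of an ideal battery at a constant average draw."""
    energy_j = capacity_mah * 3.6 * voltage_v          # mAh -> joules
    seconds = energy_j / (avg_power_uw * 1e-6)
    return seconds / (365 * 24 * 3600)

# Always-on radio at 10 mW, vs. a wake-up receiver idling at 5 uW
# that fires up the same 10 mW radio only 0.1% of the time.
always_on = average_power_uw(10_000, 10_000, 1.0)      # 10,000 uW
wake_up   = average_power_uw(5, 10_000, 0.001)         # ~15 uW

life_always = battery_life_years(1000, 3.0, always_on)
life_wakeup = battery_life_years(1000, 3.0, wake_up)
print(f"{always_on:.0f} uW -> {life_always:.3f} years")
print(f"{wake_up:.0f} uW -> {life_wakeup:.1f} years")
```

Under these assumptions the same battery lasts days with an always-on radio but decades with a wake-up receiver, which is why the approach is attractive for unattended sensor nodes.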

Researchers are also exploring energy harvesting technologies to address the limitations of solar power: in extreme cold, batteries fail to hold a charge, and in heavy shade, panels do not operate. Energy harvesting approaches being researched include thermoelectric, piezoelectric, electromagnetic, wireless power transfer, and multi-modal versions.

First battery-free cellphone makes calls by harvesting ambient power

Researchers at the University of Washington in the US demonstrated a battery-free cellphone which can run on power harvested either from ambient radio signals or light. According to Shyam Gollakota, associate professor in the university’s Paul G Allen School of Computer Science & Engineering, this could be the first functioning mobile phone to consume almost zero power: “To achieve the really, really low power consumption that you need to run a phone by harvesting energy from the environment, we had to fundamentally rethink how these devices are designed,” he says.

The battery-free cellphone takes advantage of tiny vibrations in a phone’s microphone or speaker that occur when a person is talking into a phone or listening to a call. An antenna connected to those components converts that motion into changes in a standard analog radio signal emitted by a cellular base station. This process essentially encodes speech patterns in reflected radio signals in a way that uses almost no power.

To transmit speech, the phone uses vibrations from the device’s microphone to encode speech patterns in the reflected signals. To receive speech, it converts encoded radio signals into sound vibrations that are picked up by the phone’s speaker. In the prototype device, the user presses a button to switch between these “transmitting” and “listening” modes.

The team designed a custom base station to transmit and receive the radio signals. But that technology conceivably could be integrated into standard cellular network infrastructure or Wi-Fi routers now commonly used to make calls.

The battery-free phone does still require a small amount of energy to perform some operations. The prototype has a power budget of 3.5 microwatts. The UW researchers demonstrated how to harvest this small amount of energy from two different sources. The battery-free phone prototype can operate on power gathered from ambient radio signals transmitted by a base station up to 31 feet away. Using power harvested from ambient light with a tiny solar cell — roughly the size of a grain of rice — the device was able to communicate with a base station that was 50 feet away.
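
A rough free-space link-budget sketch suggests why a 3.5-microwatt budget is plausible at that range. The transmit power, antenna gains, and frequency below are assumptions for illustration, not the UW team’s actual parameters:

```python
import math

# Free-space (Friis) estimate of RF power available at the phone.
# Transmit power, gains, and frequency are illustrative assumptions;
# the 3.5 uW budget and 31 ft range come from the article.

def friis_received_power_w(p_tx_w, gain_tx, gain_rx, freq_hz, dist_m):
    wavelength = 3e8 / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

dist_m = 31 * 0.3048                    # 31 feet in metres
p_rx = friis_received_power_w(1.0, 1.0, 1.0, 915e6, dist_m)
print(f"~{p_rx * 1e6:.1f} uW available vs. a 3.5 uW budget")
```

With a 1 W transmitter and unity-gain antennas at 915 MHz, several microwatts arrive at 31 feet, comfortably above the prototype’s budget before rectifier losses are considered.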


Soldiers Power Up with Energy-Harvesting Wearable Devices

Researchers at the Natick Soldier Research, Development and Engineering Center are working to develop wearable energy-harvesting technology solutions, including wearable solar panels as well as backpack and knee kinetic energy-harvesting devices, to reduce the weight and quantity of batteries soldiers require to power their devices.

MC-10’s photovoltaic Solar Panel Harvesters, which cover a soldier’s backpack and helmet, are constructed from thin gallium arsenide crystals that make the panel material flexible and allow it to conform to a soldier’s gear. Under bright sunlight, with the PV panel facing the sun, the backpack panel can deliver 10 watts while the helmet cover panel provides seven watts of electrical power.

Kinetic energy is also captured by the backpack’s oscillation device: as the backpack is displaced vertically, a rack attached to the frame spins a pinion that, in turn, drives a miniature power generator. It can produce 16 to 22 watts while walking, and 22 to 40 watts while running.

An articulating knee device also capitalizes on this technology by recovering kinetic energy as the knee flexes and extends.


Army grant for energy harvesting backpack

An Army grant of more than $344,000 has been awarded to Lei Zuo, associate professor and John R. Jones III Faculty Fellow of Mechanical Engineering, to create a backpack energy harvester. The technology, which is expected to weigh about one pound with a harvesting capacity of 5-20 watts, will lead to lighter packs for military members, decreased supply chain requirements, and fewer muscular and skeletal injuries caused by heavy packs, improving the overall health of the soldier.

“By using a mechanical motion rectifier (MMR), a technology converting oscillatory vibration motion into unidirectional rotation, and scaling it down, we will work to create a device that sits on the frame of a soldier’s pack and harvests energy to recharge batteries as the soldier walks,” said Zuo. “This work builds on my previous work in energy harvesting.”

In the same way that ocean waves drive the MMR as they approach and depart an ocean energy-harvesting buoy, the backpack technology works to gather power as a soldier’s pack moves up and down as the soldier walks, with the multidirectional motion of walking converted into the unidirectional rotation of a generator. “Because the generator rotates at a steady speed with higher efficiency, it provides higher energy conversion efficiency and enhanced reliability over packs with conventional rack-and-pinion systems,” Zuo said. “More important, the MMR motion will change the dynamics of a suspended backpack and enable it to harvest more electricity with less human metabolic cost.”
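
The rectifying behaviour Zuo describes can be illustrated with a toy simulation: the pack’s vertical velocity reverses sign every half stride, while the MMR’s one-way clutches keep the generator turning in a single direction. The amplitude and cadence below are assumed values, not figures from the grant:

```python
import math

# Toy illustration of a mechanical motion rectifier (MMR): the pack's
# up-and-down velocity alternates sign, but the generator is always
# driven in the same direction (like a full-wave rectifier for motion).

steps = 1000
freq_hz = 2.0          # roughly walking cadence (assumed)
amp_m_s = 0.5          # peak vertical pack velocity (assumed)
dt = 1.0 / steps       # simulate one second of motion

pack_velocity = [amp_m_s * math.sin(2 * math.pi * freq_hz * i * dt)
                 for i in range(steps)]
generator_speed = [abs(v) for v in pack_velocity]   # rectified: never negative

print(min(pack_velocity) < 0)       # input motion reverses direction
print(min(generator_speed) >= 0)    # generator only ever spins one way
```

Because the rectified speed never crosses zero, the generator avoids the stop-and-reverse losses of a plain rack-and-pinion drive, which is the efficiency advantage the quote refers to.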


Energy Harvesting technologies

Energy Harvesting (EH), also known as Energy Scavenging (ES), literally means extracting energy from the surrounding environment and converting it into electric power. The ambient scattered energy can typically be attributed to four different sources: mechanical vibration and motion, thermal gradients, ambient light, and electromagnetic/RF radiation.

The common vibration-to-electric-power conversion methods for EH-MEMS are basically three: piezoelectric, electromagnetic and electrostatic. It has been demonstrated that piezoelectric EH-MEMS can reach output power levels in the range of 10–50 µW for typical environmental vibrations, or even more than 100 µW for large accelerations. An issue to be addressed concerns sensitivity enhancement of EHs in the vibration frequency ranges available in the environment, typically up to 2–4 kHz. Non-linear elastic behaviour and multi-modality seem two promising strategies to extend the operability of MEMS vibrating EHs, writes Jacopo Iannacci.


Thermal energy

Pyroelectric materials such as AlN generate electrical charges on their surfaces when they undergo temperature changes. Although AlN is widely characterised for its electrical, mechanical and piezoelectric properties for actuation and sensing applications in MEMS/NEMS (NanoElectroMechanical Systems) devices, only a few publications discuss its pyroelectric properties. Thermoelectric EHs combined with phase-change materials have been successfully operated to power wireless nodes in aircraft.


Ambient light

Besides innovative EH-MEMS, high power densities are achievable with commercial transducers such as miniaturised Photovoltaic (PV) cells. The advantage is the availability of manageable voltages in very limited footprints. Since research in this field is rather mature, current activities in developing zero-power HW platforms that encompass PV cells are based on the incorporation of EHs as Commercial Off-The-Shelf (COTS) components.


EM and RF

RF-based EH consists of converting ambient RF energy (e.g. digital TV, 3G-4G, WiFi) into DC power. The main challenge is to provide ultra-compact devices able to operate with high efficiency over a wide dynamic range of illuminating RF power in a multiband and multi-polarisation environment. Recently, research on hybrid (RF plus solar) and conformal (on Polyethylene Terephthalate, PET, substrate) EHs was initiated.

This has to be extended by hybridisation with heterogeneous EHs (RF plus piezoelectric, RF plus thermoelectric, etc.) and by coupling EH with Wireless Power Transfer (WPT) techniques. There are essentially two types of WPT: near-field coupling and far-field radiation. Low-power WPT techniques can be used as an alternative way of powering Cyber-Physical Systems (CPSs) when too little or no energy can be harvested from other sources.

Ambient light provides the highest power levels to be harvested. However, the situation changes drastically when passing from outdoor to indoor environments, as the harvested power density drops from 10 mW/cm² down to 10 µW/cm². This makes indoor ambient light comparable with vibration/motion and thermal energy sources in their least favourable context, i.e. the human body (4 µW/cm² and 30 µW/cm², respectively).
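
These densities translate into concrete budgets for a small node. A sketch, assuming a hypothetical 10 cm² harvesting area and using the densities quoted above:

```python
# Back-of-the-envelope harvest for a small device, using the power
# densities quoted in the text. The 10 cm^2 area is an assumption.

density_uw_per_cm2 = {
    "outdoor light": 10_000,   # 10 mW/cm^2
    "indoor light": 10,
    "human motion": 4,
    "human thermal": 30,
}

area_cm2 = 10
for source, density in density_uw_per_cm2.items():
    print(f"{source}: {density * area_cm2} uW")
```

At this scale, only outdoor light yields milliwatts; every other source delivers tens to hundreds of microwatts, which is why ULP electronics and duty cycling are essential companions to harvesting.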


ULP electronics for power conversion, management and storage

The next challenges for EH are miniaturisation and integration with active electronics, opening the floor to massive exploitation of semiconductor and Microsystem technologies. Shrinking dimensions from the macro- to the micro-domain brings both pros and cons. Scaling down a device’s footprint reduces the harvested power, which at first sight looks like a limiting constraint. Nonetheless, the supply requirements of low-power and ULP electronics are less and less demanding (as mentioned above), and the development of micro-fabrication technologies makes it possible to enhance EHs’ conversion efficiency.

Given the concurrent growing need for integrated and miniaturised wireless sensor/actuator nodes, capable of energy autonomy and multiple functionalities and provided with more on-board smart capabilities, in recent years MEMS (MicroElectroMechanical Systems, hence EH-MEMS) and semiconductor-based EH have been attracting significant attention in the research and scientific community and are now indicated as an enabling technology for the IoT paradigm.

The design of µ-power converters with high efficiency and ultra-low intrinsic power consumption is of extreme interest. Some works have investigated Complementary Metal Oxide Semiconductor (CMOS) technology as a viable solution for different types of sources. A µ-power converter for thermoelectric EHs was designed to consume less than 2 µW. Further reductions to the sub-µW range would significantly boost applications of EH.

Some works in the literature focus on CMOS multi-source power converters for EH, as e.g. in Bandyopadhyay and Chandrakasan, where heterogeneous transducers are handled with a few µW of power consumption. An example of integration of microelectronic substrates, MEMS, and µ-packaging is reported in Aktakka and Najafi. There is growing interest in electronic interfaces operating in the sub-threshold region, where the minimum voltage currently remains limited to a few hundred mV. The harvested energy should also be efficiently stored in low volumes and made available on demand by user applications. Super-capacitor technology and nanostructured electrochemical batteries hold the promise of significant improvements.




New Strategic Race to develop on demand Space-Based Solar power anywhere in the world

The increasing energy demands of continued (and sustainable) global economic growth, and the thrust for renewable power driven by environmental and climate concerns, have inspired researchers to look for fundamentally new energy technologies. Space-based solar power (SBSP), in which miles-long satellites covered with solar panels capture the Sun’s radiation, convert it to electricity and then transmit it back to Earth in the form of either microwaves or lasers, could form the basis of unlimited, renewable electricity.

That power could also be used in space to meet the energy demands of future space mining and resource extraction operations. NASA is examining how space solar power could support robotic mining operations on the moon or asteroids, a stepping stone toward enabling long-term human space exploration and possible colonization of the solar system beyond Earth.

There is a race between countries such as the US, Japan, China and Russia, which have all made huge investments in this area; the space departments of India, South Korea, and Europe are also conducting related research.

Li Ming, a research fellow at the China Academy of Space Technology (CAST), has claimed that China now holds a leading position in research on space-based solar power after decades of work that have narrowed the gap with leading countries. Space-based solar power will ease environmental and energy pressure in China, and also spur the country’s innovation and emerging industries, Wang added.

The US military has also become interested in this concept, as it would save billions in fuel costs and provide ultimate flexibility in expeditionary missions, since solar power could be redirected anywhere on the planet. The SPS would also be useful for disaster missions: a thin, portable rectenna can be unfolded and deployed to receive microwaves from space, which can be converted into electrical energy.

Ralph Nansen from the US-based advocacy group Solar High urges the US to act on this because he believes that whoever develops SBSP first will have a monopoly position in the world economy, just as England did during the industrial revolution because of coal.

Space Based Solar power (SBSP)

As early as the 1960s, Dr. Peter Glaser of Arthur D. Little invented the Solar Power Satellite (SPS) concept: a large platform positioned in geostationary orbit continuously collects sunlight, converts it into microwaves or laser beams, and transmits these to the ground, where a power receiving facility converts them into electricity and hydrogen for practical use.

In space there is ten times more available solar energy than on Earth: there are no efficiency reductions due to the day-night cycle, seasonal variation, or weather conditions. Paul Jaffe, a spacecraft engineer at the U.S. Naval Research Laboratory, notes that solar panels in space would be illuminated 24 hours a day, seven days a week, 99% of the year; because Earth’s axis of rotation is tilted, a solar satellite could pick up sunlight almost all the time. Jaffe explains that individual space-based solar arrays would be able to produce 250 megawatts, scaling up to 5 gigawatts. As an example, he cites New York City, which needs around 20 gigawatts of power: by his calculations, a system of four arrays, each providing 5 gigawatts, could power the entire city.

However, the construction has long been a challenge for scientists, because its weight and size are far beyond the current carrying capacity of spacecraft. Many past studies and experiments have found the concept too costly in terms of space transportation, requiring billions of dollars to launch a rig that could be several kilometres across and weigh several thousand tons, and too complex, demanding in-space assembly of a structure ten times the size of the International Space Station, which itself is about the size of a football field.

Researchers estimate that lightweight designs of space solar panels could produce 1 kW per kilogram, thus requiring 4,000 metric tons of solar panels to produce 4 gigawatts of power. Energy captured by space-based solar panels would be transmitted wirelessly to Earth-based antennas.
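
These figures are easy to check. A short sketch using the 1 kW/kg specific power above and the $1,100/kg launch price SpaceX is targeting, quoted later in the article:

```python
# Checking the article's figures: at 1 kW of output per kilogram of
# panel, how much mass does a 4 GW station need, and what would it
# cost to launch at ~$1,100/kg?

specific_power_w_per_kg = 1_000        # 1 kW/kg
target_power_w = 4e9                   # 4 GW

mass_kg = target_power_w / specific_power_w_per_kg
mass_tonnes = mass_kg / 1_000
launch_cost_usd = mass_kg * 1_100      # targeted reusable-launch price

print(f"{mass_tonnes:,.0f} metric tons")           # matches the quoted 4,000 t
print(f"${launch_cost_usd / 1e9:.1f} billion to launch")
```

Even at aggressive reusable-launch prices, launching the panels alone runs to billions of dollars, which is consistent with the cost concerns quoted below.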

“It’s a lot of money to put one of these things up in space,”  said Ian Lange, assistant professor of economics and business and director of the Mineral and Energy Economics Program, “You need more than a model that says nuclear power is 15 cents per kilowatt hour and this is 14.”

In addition, a broad range of technical challenges must be addressed in order to establish the economic feasibility of SPS: synchronising the phases of the microwaves produced by the more than a billion antennas that would be installed on a single SPS, so as to produce a single precisely focused beam; the efficiency degradation of wireless power transmission (WPT) due to diffraction and water vapour absorption; the need for very light materials for the mirror structures to allow formation flight; and power generation and power management, including extremely high-voltage transmission cables that could channel power from the solar panels to the transmission unit with minimal resistive losses.

However, falling costs of space launches (Elon Musk’s SpaceX plans to slash the cost of launching into space from currently $20,000/kg ($10,000/lb) to $1,100/kg ($500/lb) through reusable launch vehicles), improvements in solar cell efficiency from 10 to 40% over the last four decades, advancements in space robotics, and the development of new lightweight materials, including graphene and advanced polymers, have brought back interest in the SPS concept.
The International Academy of Astronautics recently stated that space-based solar power would be viable within 30 years.



Japan, where the disastrous Fukushima meltdown heightened the search for safe, sustainable alternative energy, is also looking at space-based solar power. The Japan Aerospace Exploration Agency (JAXA), which leads the world in research on space-based solar power systems, now has a technology road map that suggests a series of ground and orbital demonstrations leading to the development in the 2030s of a 1-gigawatt commercial system, about the same output as a typical nuclear power plant.

JAXA has already demonstrated wireless microwave transmission of solar power by beaming 1.8 kilowatts of electricity via microwaves over 55 metres to a pinpoint target on a receiver, where it was successfully converted into direct electrical current. The experiment was conducted in March 2015.

If implemented, microwave-transmitting solar satellites would be positioned approximately 35,000 kilometres from Earth. JAXA says that a receiver set up on Earth with an approximately 3-kilometre (1.9-mile) radius could generate up to one gigawatt of electricity, about the same as one nuclear reactor.
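
JAXA’s numbers imply a fairly gentle beam at the receiver. A sketch spreading 1 GW over a 3-kilometre-radius rectenna and comparing the result with direct sunlight (roughly 1 kW/m², a well-known reference figure):

```python
import math

# Average beam intensity implied by JAXA's figures:
# ~1 GW spread over a receiver of ~3 km radius.

power_w = 1e9
radius_m = 3_000
area_m2 = math.pi * radius_m ** 2
intensity = power_w / area_m2          # W/m^2

sunlight = 1_000                       # direct sunlight, W/m^2
print(f"~{intensity:.0f} W/m^2, about {intensity / sunlight:.0%} of direct sunlight")
```

The average intensity works out to a few tens of watts per square metre, only a few percent of direct sunlight, which is consistent with JAXA’s claim that the beam poses little hazard to anything crossing it.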

It will be many years before that happens, if it ever does. Researchers “are aiming for practical use in the 2030s,” Yasuyuki Fukumuro, a researcher at JAXA, said on its website.

While the energy is transmitted using the same microwaves used in microwave ovens, it would not fry a bird or an airplane travelling in its path, because of its low energy density, according to a JAXA spokesman.



Having made space-based solar power a key research program since 2008, China has achieved a number of major breakthroughs in wireless energy transmission and proposed various energy-collecting solutions.

“China will build a space station in around 2020, which will open an opportunity to develop space solar power technology,” Li Ming, vice president of the China Academy of Space Technology, was quoted as saying by the Xinhua news agency. CAST has revealed more details of a 100 kW SBSP demonstration that it plans to put in low Earth orbit by 2025, followed by a fully operational SBSP system in geostationary orbit by 2050.
The project, which is still in the conceptual stage, would involve a satellite weighing thousands of tons, dwarfing anything previously placed into orbit, including the International Space Station, according to the China-based Xinhuanet, part of the Xinhua News Agency.

The world has recognized the need to replace fossil fuels with clean energies. However, ground-based solar, wind, water and other renewable energy sources are too limited in volume and too unstable. “The world will panic when the fossil fuels can no longer sustain human development. We must acquire space solar power technology before then,” says Wang, an academician of the Chinese Academy of Sciences (CAS) and a member of the International Academy of Astronautics.
“Whoever obtains the technology first could occupy the future energy market. So it’s of great strategic significance,” Wang says.

“Construction of a space solar power station will be a milestone for human utilization of space resources. And it will promote technological progress in the fields of energy, electricity, materials and aerospace,” says Wang.

“We need a cheap heavy-lift launch vehicle,” says Wang, who designed China’s first carrier rocket more than 40 years ago.
“We also need to make very thin and light solar panels. The weight of the panel must be less than 200 grams per square meter.”

He also points out that the space solar power station could become economically viable only when the efficiency of wireless power transmission, using microwave or laser radiation, reaches around 50 percent.



A division of the Russian Federal Space Agency (Roscosmos) revealed that it has a working prototype of a 100kW SBSP system in development; although no launch date was announced.


US Military 

The U.S. Naval Research Laboratory (NRL) is building a “sandwich” module: the top side is a photovoltaic panel that absorbs the Sun’s rays, an electronics system in the middle converts the energy to a radio frequency, and the bottom is an antenna that transfers the power to a target on the ground. Ultimately, the idea is to have robots assemble many of these modules in space (something the NRL’s Space Robotics Group is already working on) to form a one-kilometre, very powerful satellite.


3D printing in space

3D printing has developed at a fast pace in recent years. It is thought that by sending special 3D printers into space to manufacture the solar panels in orbit, installation costs could be drastically reduced compared to launching pre-made solar panels. In 2014, an astronaut on the International Space Station used a 3D printer to make a socket wrench in space, hinting at a future in which digital code will replace the need to launch specialized tools into orbit.


NASA / LaRC “SpiderFab” for automated on-orbit construction

A company called Tethers Unlimited (TUI) is currently developing an architecture and a suite of technologies called “SpiderFab” for automated on-orbit construction of very large structures and multifunctional space system components, such as kilometre-scale antenna reflectors.

This process will enable space systems to be launched in a compact and durable ’embryonic’ state. Once on orbit, these systems will use techniques evolved from emerging additive manufacturing and automated assembly technologies to fabricate and integrate components such as antennas, shrouds, booms, concentrators, and optics.

Under a NASA/LaRC Phase I SBIR contract, TUI is currently implementing the first step in the SpiderFab architecture: a machine that uses 3D printing techniques and robotic assembly to fabricate long, high-performance truss structures. This “Trusselator” device will enable the construction of large support structures for systems such as multi-hundred-kilowatt solar arrays, large solar sails, and football-field-sized antennas.

The development of economically viable SPS now depends mainly on the availability of adequate budgets; the vision of a ring of satellites in orbit providing nearly unlimited energy for the Earth’s needs may finally become reality.



NASA Seeks Industry’s Concepts for Solar Electric Propulsion for Deep Space Exploration Mission-2 near the moon

NASA is leading the next steps into deep space near the moon, where astronauts will build and begin testing the systems needed for challenging missions to deep space destinations, including Mars. The area of space near the moon offers a true deep space environment in which to gain experience for human missions that push farther into the solar system, and access to the lunar surface for robotic missions, while retaining the ability to return to Earth in days rather than weeks or months if needed.

The agency published a Request For Information (RFI) on July 17 to capture U.S. industry’s current capabilities and plans for spacecraft concepts that could potentially be advanced to provide power and advanced solar electric propulsion (SEP) for NASA’s deep space gateway concept. Solar electric propulsion typically refers to the combination of solar cells and an ion drive for propelling a spacecraft through outer space. NASA has studied this technology and considers it promising: the main concept couples solar panels on the spacecraft with an ion thruster.

NASA is examining a lunar-orbiting, crew-tended spaceport concept that would serve as a gateway to deep space. In addition to the power propulsion element, the gateway would include a habitat to extend Orion crew time, a docking capability, and would be serviceable by logistics modules to enable research and replenishment for deep space transport infrastructure.

NASA is in the early stages of acquisition planning with the goal of developing a flight unit payload to launch on the agency’s second integrated mission of the Space Launch System rocket and Orion spacecraft.

“Through the RFI, we hope to better understand industry’s current state-of-the-art and potential future capabilities for deep space power and propulsion,” said Michele Gates, director of the Power Propulsion Element at NASA Headquarters in Washington. “With the upcoming BAA, we will fund industry-led studies to identify the most urgent areas for focus over the next several years, for the benefit of human spaceflight, as well as commercial applications.”

One of the Technology Areas (TAs) of NASA’s Mars roadmaps is In-Space Propulsion Technologies, which addresses the development of higher-power electric propulsion, nuclear thermal propulsion, and cryogenic chemical propulsion. Improvements derived from technology candidates within this TA will decrease transit times, increase payload mass, provide safer spacecraft, and decrease costs.


Deep Space Gateway and Deep Space Transport

Under a program dubbed Deep Space Gateway, agency officials said they still plan to use the lunar orbit as a staging platform to build and test the infrastructure and the systems needed to send astronauts to Mars. But instead of breaking off a chunk of asteroid and dragging it to the moon, NASA’s new plan calls for building an orbiting spaceport that could have even more uses.

The second phase of missions will confirm that the agency’s capabilities built for humans can perform long duration missions beyond the moon. For those destinations farther into the solar system, including Mars, NASA envisions a deep space transport spacecraft.

This spacecraft would be a reusable vehicle that uses electric and chemical propulsion and would be specifically designed for crewed missions to destinations such as Mars. The transport would take crew out to their destination, return them back to the gateway, where it can be serviced and sent out again. The transport would take full advantage of the large volumes and mass that can be launched by the SLS rocket, as well as advanced exploration technologies being developed now and demonstrated on the ground and aboard the International Space Station.


NASA to fly ion thruster on Mars orbiter

An ion thruster is a form of electric propulsion used for spacecraft. It creates thrust by accelerating ions with electricity: as the ionised particles are expelled from the spacecraft, they generate a reaction force in the opposite direction. Ion thrusters are usually powered by solar panels, but at sufficiently large distances from the sun, nuclear power is used.


Michael Patterson, senior technologist for NASA’s In-Space Propulsion Technologies Program, compared ion and chemical propulsion to the “Tortoise and the Hare”. “The hare is a chemical propulsion system and a mission where you might fire the main engine for 30 minutes or an hour and then for most of the mission you coast.” “With electric propulsion, it’s like the tortoise, in that you go very slow in the initial spacecraft velocity but you continuously thrust over a very long duration — many thousands of hours — and then the spacecraft ends up picking up a very large delta to velocity.”
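
Patterson’s tortoise can be put into numbers with a constant-mass approximation. The thrust, spacecraft mass, and burn time below are illustrative assumptions (roughly NSTAR-class), not figures for any particular mission:

```python
# Tortoise vs. hare in numbers: a tiny but continuous thrust builds
# up a large delta-v. All values below are illustrative assumptions;
# mass is held constant for simplicity (real missions expend propellant).

thrust_n = 0.09          # ~90 mN, typical of a small ion thruster
mass_kg = 500            # assumed spacecraft mass
hours = 10_000           # "many thousands of hours" of thrusting

accel = thrust_n / mass_kg                 # m/s^2, tiny
delta_v = accel * hours * 3600             # m/s, accumulated over the burn
print(f"{delta_v / 1000:.1f} km/s of delta-v")
```

An acceleration of fractions of a millimetre per second squared, sustained for over a year of thrusting, accumulates several kilometres per second of delta-v, which a chemical stage could only match by carrying far more propellant.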


The NASA Glenn Research Center has been a leader in ion propulsion technology development since the late 1950s. Its NASA Solar Technology Application Readiness (NSTAR) ion propulsion system enabled the Deep Space 1 mission, the first spacecraft propelled primarily by ion propulsion, to travel over 163 million miles and make flybys of the asteroid Braille and the comet Borrelly.


NASA Glenn recently awarded a contract to Aerojet Rocketdyne to fabricate two NEXT flight systems (thrusters and power processors) for use on a future NASA science mission. In addition to flying the NEXT system on NASA science missions, NASA plans to take the NEXT technology to higher power and thrust-to-power so that it can be used for a broad range of commercial, NASA, and defense applications.


NASA Works to Improve Solar Electric Propulsion for Deep Space Exploration

NASA has selected Aerojet Rocketdyne, Inc. of Redmond, Washington, to design and develop an advanced electric propulsion system that will significantly advance the nation’s commercial space capabilities, and enable deep space exploration missions, including the robotic portion of NASA’s Asteroid Redirect Mission (ARM) and its Journey to Mars.


“Through this contract, NASA will be developing advanced electric propulsion elements for initial spaceflight applications, which will pave the way for an advanced solar electric propulsion demonstration mission by the end of the decade,” said Steve Jurczyk, associate administrator of NASA’s Space Technology Mission Directorate (STMD) in Washington. “Development of this technology will advance our future in-space transportation capability for a variety of NASA deep space human and robotic exploration missions, as well as private commercial space missions.”


Aerojet Rocketdyne will oversee the development and delivery of an integrated electric propulsion system consisting of a thruster, power processing unit (PPU), low-pressure xenon flow controller, and electrical harness. NASA has developed and tested a prototype thruster and PPU that the company can use as a reference design.


The company will construct, test and deliver an engineering development unit for testing and evaluation in preparation for producing the follow-on flight units. During the option period of the contract, if exercised, the company will develop, verify and deliver four integrated flight units – the electric propulsion units that will fly in space. The work being performed under this contract will be led by a team of NASA Glenn Research Center engineers, with additional technical support by Jet Propulsion Laboratory (JPL) engineers.


The first operational test of an electric propulsion system in space was Glenn’s Space Electric Rocket Test 1, which flew on July 20, 1964. Since then, NASA has increasingly relied on solar electric propulsion for long-duration, deep-space robotic science and exploration missions to multiple destinations, the most recent being NASA’s Dawn mission. The Dawn mission, managed by JPL, surveyed the giant asteroid Vesta and the protoplanet Ceres between 2011 and 2015.


The advanced electric propulsion system is the next step in NASA’s Solar Electric Propulsion (SEP) project, which is developing critical technologies to extend the range and capabilities of ambitious new science and exploration missions. ARM, NASA’s mission to capture an asteroid boulder and place it in orbit around the moon in the mid-2020s, will test the largest and most advanced SEP system ever utilized for space missions.


NASA’s First Launch of SLS and Orion

NASA is hard at work building the Orion spacecraft, Space Launch System (SLS) rocket and the ground systems needed to send astronauts into deep space. The agency is developing the core capabilities needed to enable the journey to Mars.

Orion’s first flight atop the SLS will not have humans aboard, but it paves the way for future missions with astronauts. During this flight, currently designated Exploration Mission-1 (EM-1), the spacecraft will travel thousands of miles beyond the moon over the course of about a three-week mission. It will launch on the most powerful rocket in the world and fly farther than any spacecraft built for humans has ever flown. Orion will stay in space longer than any ship for astronauts has done without docking to a space station and return home faster and hotter than ever before.

This first exploration mission will allow NASA to use the lunar vicinity as a proving ground to test technologies farther from Earth and demonstrate that it can reach a stable orbit in the area of space near the moon in order to support sending humans to deep space, including for the Asteroid Redirect Mission. NASA and its partners will use this proving ground to practice deep-space operations, decreasing reliance on the Earth and gaining the experience and systems necessary to make the journey to Mars a reality.


Request for Information (RFI)

The Power and Propulsion Element (PPE) is the first planned element in the Deep Space Gateway (DSG) concept and would launch as a co-manifested payload with the Orion crewed vehicle on the Space Launch System (SLS) on Exploration Mission-2.

This NextSTEP Appendix C, targeted for release in the August 2017 timeframe, will seek proposals for areas necessitating further study for this specific application of advanced solar electric propulsion (SEP). Studies are anticipated to be brief (3-4 month duration) with succinct products to assist in the development of the PPE concept and approach.

Studies are intended to address key drivers for PPE development, such as (but not limited to) potential approaches to:

  • meeting the intent of human rating requirements;
  • concept and layout development;
  • attitude control;
  • propulsive maneuverability;
  • power generation;
  • power interface standards;
  • power transfer to other Gateway elements;
  • hosting multiple International Docking System Standard (IDSS) compatible docking systems;
  • batteries/eclipse duration;
  • a 15-year lifetime;
  • communications;
  • avionics;
  • assembly, integration and test approaches;
  • extensibility;
  • accommodation of potential (international or domestic partner provided) hardware such as robotic fixtures, science and technology utilization, and other possible elements;
  • options for cost share/cost contributions.


NASA may also request an assessment of the impact of acquiring high-power, high-throughput SEP strings as part of the commercial bus, rather than through a Government Furnished Equipment route.


PPE Reference Capability Descriptions

  • The PPE will have a minimum operational lifetime of 15 years in cis-lunar space.
  • The PPE will be capable of transferring up to 24kW of electrical power to the external hardware.
  • The PPE will be capable of providing orbit transfers for a stack of TBD mass with a center of gravity of TBD.
  • The PPE will be capable of providing orbit maintenance for a stack of TBD mass with a center of gravity of TBD.
  • The PPE will have a 2,000 kg-class xenon tank capacity.
  • The PPE will be compatible with the SLS vehicle co-manifested launch loads on the Exploration Mission -2 (EM-2) flight.





Wind power is set to play pivotal role in the world’s future energy supply enabled by New Breakthroughs

The world signed the COP21 climate deal in Paris, which implies a steadily rising penalty on carbon emissions. During the recent climate conference in Paris, 70 countries highlighted wind as a major component of their emissions-reduction schemes. “By 2020, wind power could prevent more than 1 billion tonnes of carbon dioxide from being emitted each year by dirty energy – equivalent to the emissions of Germany and Italy combined,” said Sven Teske, Greenpeace senior energy expert.

Demand for wind power is predicted to grow strongly in the future: wind power will account for 14% of the world’s primary energy supply — one percentage point above solar PV — by mid-century. Wind will also provide 36% of world electricity generation by 2050, with two-thirds of this generation coming from onshore projects, according to the Energy Transition Outlook report released in September 2017.

Wind is emerging as a reliable and inexpensive source of renewable energy. Globally, the average cost of wind is $83 per megawatt-hour, compared with averages of $84 for coal and $98 for gas. “In the USA, gas is slightly cheaper than wind but this is the only large economy where that is the case. As a comparison, solar photovoltaic energy averages $122 globally for each MW-hour,” said Giles Dickson, CEO of the European Wind Energy Association (EWEA). Wind power’s costs are expected to tumble by 16% as capacity doubles over the next 33 years, while the cost of solar PV is set to fall by 18% over the same timescale.

The global wind power leaders as at end-2015 are China, United States, Germany, India and Spain.


The World’s First Floating Wind Farm Is Now Producing Energy

Floating wind farms far out at sea hold a lot of promise for future energy generation. Wind turbines can be packed more densely far out at sea than on land or near the coast, because the drag effect that reduces wind flow is less pronounced far from shore. That means it’s possible to extract six megawatts per square kilometer rather than the 1.5 achieved on land using the same turbines. Analysis suggests that three million square kilometers of floating wind turbines could supply the entire world’s current energy demand.
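The area claim is easy to sanity-check with the power densities quoted above (the figures are the article’s, not independently verified):

```python
# Rough check of the floating-wind claim using figures quoted in the text.
area_km2 = 3_000_000         # hypothesized floating-turbine area
offshore_density_mw = 6.0    # MW per square kilometer far offshore
onshore_density_mw = 1.5     # MW per square kilometer on land

offshore_tw = area_km2 * offshore_density_mw / 1e6  # MW -> TW
onshore_tw = area_km2 * onshore_density_mw / 1e6
print(f"offshore: {offshore_tw:.0f} TW, same area onshore: {onshore_tw:.1f} TW")
```

The offshore figure, about 18 TW, is indeed on the order of the world’s current total primary energy demand.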

Hywind Scotland, situated in Buchan Deep, is the world’s first floating wind farm, with its five six-megawatt turbines now generating electricity. On shore, a one-megawatt-hour lithium battery also helps smooth its potentially erratic supply of electricity to the grid. It’s also a concept that’s catching on elsewhere, with a scheme similar to the Scottish project under consideration in California.

The project, which is a collaboration between the Norwegian oil firm Statoil and Masdar (the Abu Dhabi Future Energy Company), makes use of turbine towers that are 253 meters tall, with 78 meters of that submerged in the North Sea. Each tower is tethered by three cables anchored to the seabed.

The Buchan Deep project cost a total of $263 million to complete. It currently receives $185 per megawatt-hour in subsidies from the British government, on top of the $65 per megawatt-hour it earns as the wholesale price of the electricity it generates. In other words, it’s damned expensive. Statoil says that it hopes floating wind farms could produce energy for between $50 and $70 per megawatt-hour by 2030.
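Putting the numbers in the paragraph above side by side shows the scale of the cost gap (a simple restatement of the text’s figures):

```python
subsidy = 185        # USD/MWh, UK government support (from the text)
wholesale = 65       # USD/MWh earned at the wholesale electricity price
current = subsidy + wholesale
target_low, target_high = 50, 70   # Statoil's hoped-for 2030 cost range

print(f"today: ${current}/MWh, 2030 target: ${target_low}-{target_high}/MWh")
print(f"implied cost reduction: at least {1 - target_high / current:.0%}")
```

Even hitting the top of the target range would require cutting today’s effective price by roughly three-quarters.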

Harnessing wind energy high up in the sky

“Wind turbines on the Earth’s surface suffer from the very stubborn problem of intermittent wind supply,” said KAUST atmospheric scientist Udaya Bhaskar Gunturu in a release put out by the university. This has led researchers and energy companies worldwide to look upwards and explore the possibility of harnessing the strong and reliable winds at high altitudes.

Flying a wind turbine on a kite — with the electricity being delivered to the ground through its tether — may seem an unlikely scenario, but several companies worldwide are already testing prototype systems.

Tethered kites could potentially offer the flexibility to vary the altitude of the turbines as wind conditions change. Current technology would most likely allow harvesting wind energy at heights of two to three km, but there is also a lot of wind even higher than that. The researchers found that the most favourable regions for high-altitude wind energy in West Asia are over parts of Saudi Arabia and Oman.

 Commercial tankers using sail power to navigate the seas could be the wave of the future.

Norsepower Oy Ltd, a Finnish engineering and technology company, in partnership with Maersk Tankers, the Energy Technologies Institute and Shell Shipping & Maritime, announced in March the installation and testing of Flettner rotor sails onboard a Maersk Tankers vessel.

The project, the first installation of wind-powered energy technology on a product tanker, will provide insights into fuel savings and operational experience. The rotor sails will be fitted during the first half of 2018, before undergoing testing and data analysis at sea until the end of 2019.

Maersk Tankers will supply a 109,647-ton Long Range 2 product tanker, which will be retrofitted with two Norsepower Rotor Sails, each 98 feet tall and 16 feet in diameter. The design looks like a pair of narrow smokestacks. Combined, the sails are expected to reduce average fuel consumption on typical global shipping routes by 7-10 percent.

The Norsepower Rotor Sail is a modernized version of the Flettner rotor — a spinning cylinder using the Magnus effect to harness wind power to propel a ship. Each Rotor Sail is made using the latest intelligent lightweight composite sandwich materials. When wind conditions are favorable, the main engines can be throttled back, providing a net fuel cost and emission savings, while not impacting scheduling.
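The Magnus force a spinning rotor can generate is often estimated with the Kutta-Joukowski theorem for an ideal rotating cylinder. The sketch below is an idealized upper bound, not a Norsepower figure: real rotors achieve only a fraction of it, and the wind speed and spin rate are assumptions.

```python
import math

rho = 1.225                   # air density, kg/m^3
wind = 10.0                   # assumed apparent wind speed, m/s
radius = 16 * 0.3048 / 2      # 16 ft diameter (from the text) -> radius in m
height = 98 * 0.3048          # 98 ft tall (from the text) -> m
rpm = 180                     # assumed rotor spin rate
omega = rpm * 2 * math.pi / 60

# Circulation around a cylinder whose surface moves at omega * r:
circulation = 2 * math.pi * radius ** 2 * omega
lift_per_m = rho * wind * circulation   # Kutta-Joukowski: L' = rho * U * Gamma
total_kN = lift_per_m * height / 1000
print(f"ideal sideways thrust per rotor: {total_kN:.0f} kN")
```

The point of the exercise is qualitative: a slender spinning cylinder can develop large forces from modest winds, which is why a 7-10 percent fuel saving from two rotors is plausible.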

Tuomas Riski, CEO of Norsepower, said in a release: “As an abundant and free renewable energy, wind power has a role to play in supporting the shipping industry to reduce its fuel consumption and meet impending carbon reduction targets.”

Challenges of Wind Power

The intermittency and variability of the wind resource, and hence of wind turbine output, pose challenges to the integration of wind power generation to the existing electricity network. Intermittent generation will be evident at site level, but due to geographical diversity will reduce when generation is considered over larger areas (such as country or regional level). Hence, the intermittency of wind generation can be reduced significantly if the power outputs of wind farms over a specific area are aggregated together.
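The smoothing effect of aggregation can be illustrated with a toy simulation: if farm outputs were statistically independent, the relative variability of the aggregate would fall roughly as one over the square root of the number of farms (real farms are partially correlated, so the gain is smaller).

```python
import random
import statistics

random.seed(42)

def relative_std(n_farms, hours=20000):
    """Relative variability of the summed output of n independent farms."""
    totals = []
    for _ in range(hours):
        # toy model: each farm's hourly output is uniform on 0..20 MW
        totals.append(sum(random.uniform(0, 20) for _ in range(n_farms)))
    return statistics.pstdev(totals) / statistics.fmean(totals)

for n in (1, 4, 16):
    print(f"{n:2d} farms: relative std {relative_std(n):.2f}")
```

Sixteen independent farms show roughly a quarter of the relative variability of a single farm, which is the statistical basis of the aggregation argument above.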

University of Delaware researchers report in a new study that offshore wind may be more powerful, yet more turbulent than expected in the North Eastern United States. The findings, published in a paper in the Journal of Geophysical Research: Atmospheres, could have important implications for the future development of offshore wind farms in the U.S., including the assessment of how much wind power can be produced, what type of turbines should be used, how many turbines should be installed and the spacing between each.

The paper’s main finding is that atmospheric conditions around Cape Wind are predominantly turbulent, or unstable, which is in stark contrast to prevailing data from European offshore wind farms in the Baltic Sea and the North Sea. Explaining how wind can be stable, unstable or neutral is a tricky business, Archer says. “When the atmosphere is stable, winds are smooth and consistent (think of when a pilot tells airline passengers to sit back and enjoy the ride). When the atmosphere is unstable, it is similar to turbulence experienced by airline passengers during a flight—the wind is choppy and causes high winds from above and slow winds from below to crash into each other and mix together, causing a bumpy and unpredictable ride for the air current.” Neutral conditions hover in the middle, with an average amount of turbulence and wind speed variation.

An expert in designing offshore wind farms, Archer says the findings may have implications for how future offshore wind farms in the region are designed. “The advantage of these turbulent conditions is that, at the level of the turbines, these bumps bring high wind down from the upper atmosphere where it is typically windier. This means extra wind power, but that extra power comes at a cost: the cost of more stress on the turbine’s blades,” explains Archer. “If you have increased turbulence, you’re going to design a different farm, especially with regard to turbine selection and spacing. And guess what? Even the wind turbine manufacturing standards are based on the assumption of neutral stability,” Archer says.

 Tech innovations could cut offshore wind energy costs by a third by 2030

The levelised cost of energy from offshore wind farms in Europe could be reduced by as much as a third by 2030 if a range of technological innovations, such as larger turbines and more efficient rotors, is deployed. That is the conclusion of a report released last week by sustainable energy technology investor KIC InnoEnergy and technical consultancy BVG Associates. The study used KIC InnoEnergy’s offshore wind cost model to analyse the extent to which 51 innovations could help cut the cost of wind energy through changes to design, hardware, software or processes.

The changes included the introduction of mass-produced support structures for use in deeper water with larger turbines, using bespoke vessels and equipment capable of operating in a wider range of conditions, and the use of more upfront investment in wind farm development to improve site investigations and engineering studies.

Two-thirds of the estimated cost savings were found to be achievable through just nine areas of innovation, such as improvements in blade aerodynamics and optimising the layout of arrays. The innovation with the largest potential impact on cost reduction was increasing turbine size from 4MW to 10MW, the analysis found, since using fewer turbines leads to significant savings in the cost of foundations, construction, and operations.



A wind turbine’s blades convert kinetic energy from the movement of air into rotational energy; a generator then converts this rotational energy to electricity. The available wind power is proportional to the rotor’s swept area and to the cube of the wind speed. Theoretically, when the wind speed is doubled, the available wind power increases by a factor of eight.
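The scaling stated above can be written as P = ½ρACpv³, where A is the swept area and Cp the power coefficient. A minimal sketch (the 0.4 coefficient and the example rotor size are assumed, typical values):

```python
import math

def wind_power_kw(rotor_diameter_m, wind_speed_ms, cp=0.4, rho=1.225):
    """P = 0.5 * rho * A * cp * v^3, returned in kW.

    cp is the power coefficient; the theoretical Betz limit is ~0.593,
    and ~0.4 is a typical real-world value (assumed here).
    """
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * rho * area * cp * wind_speed_ms ** 3 / 1000

base = wind_power_kw(rotor_diameter_m=100, wind_speed_ms=8)
doubled = wind_power_kw(rotor_diameter_m=100, wind_speed_ms=16)
print(f"power ratio at double the wind speed: {doubled / base:.0f}x")
```

The cubic dependence is why siting and hub height matter so much: small gains in wind speed translate into large gains in output.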

Turbines have aerodynamic ‘smart’ blades made of carbon composite with wireless sensors, and can ‘pitch’ in and out of the wind in response to shifts in air flow. “There has been a huge leap forward in technology even over the last couple of years. They are pushing the boundaries of energy capture,” said Cian Cornroy from the offshore experts ORE Catapult in Glasgow. “They are using new metals in the generators that cut the need for servicing. There are cameras to relay digital data through cloud computing that can reset the turbines. You have to be bullish,” he said.

Direct-drive eliminates the gearbox, and could be crucial in removing the limiting size and weight of future turbines of 10 MW and beyond. Hybrid drive systems have simpler and more reliable gearing than conventional solutions with three stages of gearing, while having a similar generator size.

Remote electronic controls are continually being incorporated into turbine design. In addition to pitch control and variable speed operation, individual turbines and whole farms may perform wind measurements remotely, using turbine-mounted technology such as lidar (LIght Detection and Ranging) and sodar (SOnic Detection and Ranging). The real-time data from remote sensing will optimise wind production as turbines constantly pitch themselves to the incoming wind.

The reliability of a wind turbine in generating power is indicated by the availability of the turbine, which is the proportion of time the turbine is ready for operation. Onshore turbines typically have availabilities of 98%, while offshore turbine availabilities are slightly lower (95-98%) but are improving due to better operation and maintenance.

In 2015, a paper from the Department of Energy (DOE) suggested that increasing rotor diameter and height is the best way to access more power from wind turbines, even in areas with lower wind speeds.

A study conducted by researchers from Berkeley Lab, the National Renewable Energy Laboratory (NREL), and the University of Massachusetts found that the cost of wind power could be reduced by 24 to 30 percent by 2030, based on advances in turbine technology that are either projected or already being seen today. Those experts said that by 2030, both onshore and offshore wind turbines will get bigger, leading to additional cost reductions and smoother energy generation.

In 2015, onshore wind turbines averaged a hub height of 82m, a rotor diameter of 102m, and a power output of 2 MW. In 2030, experts on average suggest that onshore wind turbines will have a hub height of 115m, a rotor diameter of 135m, and a power output of 3.25 MW.

Offshore, the story is more dramatic. Where today’s turbines have a hub height of 90m, a rotor diameter of 119m, and a power output of 4.1 MW on average, 2030’s offshore wind turbines will measure 125m and 190m in hub height and rotor diameter, respectively, and output an insane 11 MW on average—each.
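These projections are roughly consistent with area scaling: at a fixed rated wind speed, power grows with the square of the rotor diameter. A quick check using the survey’s own numbers:

```python
# Power scales roughly with swept area, i.e. rotor diameter squared.
today_mw, today_rotor_m = 4.1, 119   # 2015 offshore averages from the survey
future_rotor_m = 190                 # projected 2030 rotor diameter

predicted_mw = today_mw * (future_rotor_m / today_rotor_m) ** 2
print(f"area-scaling prediction: {predicted_mw:.1f} MW (survey projects ~11 MW)")
```

The simple geometric estimate lands within about half a megawatt of the experts’ 11 MW figure.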

Wind Lens

The Wind Lens is the brainchild of researchers at Kyushu University, who say it would generate more power than traditional wind turbines using a unique design. The Wind Lens focuses airflow just as a lens focuses light.

The ring around the circle of turbine blades curves inward, directing the flow of air and accelerating it. The team leader states that the inlet shroud, diffuser and brim in the inward-curving ring cause the air to be drawn in more quickly, which means the turbine generates more power. The researchers claim that this new wind turbine technology will allow turbines to triple their output, while even reducing the noise that the turbines cause.

The Wind Lens holds great promise for Japan as a source of green renewable energy. Since Japan is an island nation, it will be able to make full use of offshore wind farms, which is where researchers feel the new technology will perform best. The Wind Lens can float on hexagon-shaped platforms, and far out at sea it will not be subject to large waves or tsunamis, since these achieve their destructive power only upon nearing a shoreline.


The Vortex Bladeless Micro Wind Turbine

The startup Vortex Bladeless is developing a micro wind turbine shaped like a pole, with no blades or other moving parts. It relies on an aerodynamic phenomenon called vorticity, in which wind flowing around a structure creates a pattern of small vortices or whirlwinds. When these mini-whirlwinds get large enough, they can cause the structure to oscillate, and the turbine converts this mechanical energy into electricity.

However, a given structure will only oscillate at particular frequencies. Vortex has developed a “magnetic coupling system” that broadens the range of usable frequencies and maximizes energy generation. The microturbine can automatically vary its rigidity and “synchronize” with the incoming wind speed in order to stay in resonance without any mechanical or manual interference.

On the plus side, the turbine’s ultra-slim silhouette could enable it to fit into all sorts of tight spaces where larger turbines can’t; the main point of contention, however, is the cost-effectiveness of micro wind turbines. The initial product line consists of two models, a 1-megawatt Gran and a 4-kilowatt Mini. France’s Eiffel Tower recently got a full-on green makeover, including a pair of high-visibility vertical micro wind turbines embedded in the tower itself.

Wind Farm

A wind farm is a group of wind turbines in the same location used to produce electricity. A large wind farm may consist of several hundred individual wind turbines and cover an extended area of hundreds of square miles, but the land between the turbines may be used for agricultural or other purposes. A wind farm can also be located offshore.

Wind farm technology has also become very sophisticated and efficient. The world’s biggest offshore wind farm is to be built 75 miles off the coast of Grimsby, at an estimated cost to energy bill-payers of at least £4.2 billion. The giant Hornsea Project One wind farm will consist of 174 turbines, each 623 ft tall, with a total capacity of 1.2 gigawatts, enough to power one million homes.


 GE bringing industrial Internet to wind farms

General Electric Co. has announced a new wind farm technology that will improve output by 20 percent — providing the wind power industry with $50 billion in added revenue. “It’s a huge breakthrough for renewable energy and specifically wind power,” Bolze told the Times Union during a telephone interview. “The world wants more wind power. Same wind, 20 percent more electrical output. That’s huge.”

Steve Bolze, the CEO of GE Power & Water, said the new product — called the Digital Wind Farm — has been in development for the past 18 months and combines the company’s two-megawatt wind turbines with GE modeling software, sensors and the industrial Internet, which allows machines to exchange data, or “talk” to one another.

Integral to achieving all this has been the development of more precise, accurate, robust, and responsive wind-energy forecasting algorithms; grid-scale batteries built into the turbines; real-time wind turbine networking; and power management. The industrial Internet communicates with grid operators to predict wind availability and power needs, helping to manage wind’s variability and provide smooth, predictable power.


Breakthrough Magnetic Alloy Could Lead To Cheaper Cars, Wind Turbines

Scientists have created a promising new magnetic material that could lead to cheaper cars and wind turbines. The new magnetic alloy is a viable alternative to expensive rare-earth permanent magnets, the U.S. Department of Energy and Ames Laboratory reported. The material could eliminate the need for one of the “scarcest and costliest” rare-earth elements, dysprosium, replacing it with abundant cerium.

The alloy is composed of neodymium, iron and boron “co-doped” with cerium and cobalt. Recent experiments demonstrated that the cerium-containing alloy boasts intrinsic coercivity (the ability of a magnetic material to resist demagnetization) even greater than that of dysprosium-containing magnets at high temperatures. The material is also between 20 and 40 percent cheaper than conventional dysprosium-containing magnets.

“This is quite [an] exciting result; we found that this material works better than anything out there at temperatures above 150 [degrees Celsius],” said study leader Karl A. Gschneidner. “It’s an important consideration for high-temperature applications.” Past attempts to use cerium in rare-earth magnets were unsuccessful because the element reduces the Curie temperature (the temperature at which an alloy loses its magnetic properties). The new method of co-doping with cobalt allowed the scientists to substitute cerium for dysprosium without reducing the magnetic properties of the material.





DARPA’s N-ZERO extends the lifetime of IoT devices and remote sensors from months to years

Today U.S. soldiers are being killed because the Defense Department cannot deploy all the sensors it would like to. DoD could deploy sensors every few yards to detect buried improvised explosive devices (IEDs). As it is, every sensor deployed today has to be battery powered, so even if vast sensor nets were deployed, more soldiers would be put in jeopardy by being forced to expose themselves to ambush attacks while changing sensor batteries.

By 2018, DARPA’s N-ZERO initiative aims to have deployable sensor networks that require near-zero standby power, a goal the team quickly found was impossible without microelectromechanical systems (MEMS). In addition, the teams discovered an extra benefit of MEMS, an advantage they had never imagined possible: MEMS provides not just near-zero standby power, but can be configured for absolutely zero standby power by using the energy of the signal to be detected to power up the transmitter. And in some situations the transmitter, too, can be powered without a battery, by storing up energy on a super-capacitor from renewable sources, from solar to vibration harvesters.

The Department of Defense has an unfilled need for persistent, event-driven sensing capabilities, where physical, electromagnetic and other sensors can remain dormant, with near zero-power consumption, until awakened by an external trigger or stimulus. Current state-of-the-art sensors use active electronics to monitor the environment for the external trigger, consuming power continuously and limiting the sensor lifetime to months or less.

The N-ZERO program intends to extend the lifetime of remotely deployed communications and environmental sensors from months to years, by supporting projects that demonstrate the ability to continuously and passively monitor the environment, waking an electronic circuit only upon the detection of a specific trigger signature. Specifically, N-ZERO seeks to extend unattended sensor lifetime from weeks to years, cutting the costs of maintenance and the need for redeployments. Alternatively, N-ZERO could reduce the battery size of a typical ground-based sensor by a factor of 20 or more while still keeping its current operational lifetime.

“We wanted to learn how to reduce our sensors power envelope so that we could deploy them right at the tactical edge with a battery that does not need to be replaced for a long period of time,” said DARPA program manager Roy (Troy) Olsson in his keynote address titled Event Driven Persistent Sensing.

A team of researchers at Northeastern University has developed a new sensor powered by the very infrared energy it is designed to detect. The device, commissioned as part of DARPA’s Near Zero Power RF and Sensor Operation (N-ZERO) program, consumes zero standby power until it senses infrared (IR) wavelengths. The sensor has many potential military applications: it can detect vehicles and tanks, and even identify whether a target is a truck, a car, or an aircraft, by detecting the heat they emit in the IR spectrum and analysing their IR signatures, which differ because engines burning gasoline or diesel fuels produce emissions made up of different chemical compounds.

Requirement of new technologies to power IoT and wireless sensor networks

DARPA’s N-ZERO program can also enable the future billions of Internet of Things (IoT) devices that shall be deployed ‘everywhere’ and accessed ‘any time’ from ‘anywhere’. “What we can do today really doesn’t fulfill the vision of the Internet of Things,” Troy Olsson, DARPA’s N-ZERO program manager, told SIGNAL. “We can either connect devices that have power already, like your refrigerator, or devices that you can recharge every day or every couple of days, like a cellular phone. You can connect and interconnect those, and some people call that the Internet of Things.” For Olsson, true IoT will involve sensors everywhere that are untethered from either a power supply or from having to be recharged constantly.

Powering the future billions of Internet of Things (IoT) devices would require billions of batteries to be purchased, maintained, and disposed of. Energy harvesting from ambient sources presents the best alternative for large-scale, self-contained IoT. State-of-the-art (SOA) sensors use active electronics to monitor the environment for the external trigger, consuming power continuously and limiting the sensor lifetime to durations of months or less. In addition, this increases the cost of deployment, either by necessitating the use of large, expensive batteries or by demanding frequent battery replacement. It also increases warfighter exposure to danger. Researchers have evolved many approaches to tackle the energy consumption of battery-powered devices. Wireless sensor network standards have been specifically designed to take into account the scarce resources of nodes.

Sensor devices combine sensing, communication and data-processing capabilities. The sensor nodes gather information or detect special events and send the data to a base station for processing. The radio module is the main contributor to battery depletion. To reduce the energy dissipated in wireless communications, researchers have tried to optimise radio parameters such as coding and modulation schemes, transmission power and antenna direction.

Another category of solutions aims to reduce the amount of data to be delivered to the sink. Two methods can be adopted jointly: the limitation of unneeded samples and the limitation of sensing tasks because both data transmission and acquisition are costly in terms of energy.

Idle states are a major source of energy consumption in the radio component. Sleep/wakeup schemes aim to adapt node activity and save energy by putting the radio into sleep mode.
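The payoff of sleep/wakeup scheduling is easy to see in a back-of-envelope battery-life estimate; all of the currents below are assumed, illustrative values for a generic battery-powered node.

```python
battery_mah = 1000      # assumed battery capacity
active_ma = 20.0        # radio awake (receive/transmit), assumed
sleep_ma = 0.005        # deep sleep, 5 microamps, assumed

def lifetime_days(duty_cycle):
    """Battery life for a node whose radio is awake the given fraction of time."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma / 24

print(f"always-on radio : {lifetime_days(1.0):6.1f} days")
print(f"1% duty cycle   : {lifetime_days(0.01):6.1f} days")
```

Even at a 1 percent duty cycle, the average draw is still dominated by the awake fraction, which is why N-ZERO pushes the standby state itself toward zero rather than merely shortening the awake time.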

DARPA seeks to transform the energy efficiency of these unattended sensors through elimination or substantial reduction of the standby power consumed by the sensors while they await a signature of interest. The improved energy efficiency is expected to result in an increase in the sensor mission lifetime from months to years.



DARPA’s N-ZERO program

The program intends to exploit the energy in the signal signature itself to detect and discriminate the events of interest while rejecting noise and interference. N-ZERO program intends to develop the underlying technologies and demonstrate the capability to continuously and passively monitor the environment, and wake-up an electronic circuit upon detection of a specific trigger signature. Thus, sensor lifetime will be limited only by processing and transmission of confirmed events, or ultimately by the battery self-discharge.

The N-ZERO program has three phases. The first, which ended December 2016, took 15 months to complete. The second and third phases will each take one year. Some research teams achieved goals in the program’s first phase that they were expected to reach much later.

Ultimately, the goal of the N-ZERO program is to design, build, and test intelligent sensors and microsystems that exploit the energy in, and the unique features of, a signature of interest in order to process and detect the signature’s presence and reject noise and interference, all while consuming less than 10 nanowatts (nW) during the sensor’s asleep-yet-aware phase, an energy drawdown roughly equivalent to the self-discharge (battery discharge during storage) of a typical watch battery, and at least 1,000 times lower than state-of-the-art sensors.
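To see why 10 nW is comparable to a watch battery’s self-discharge, consider the total energy in a small silver-oxide cell (the 28 mAh / 1.55 V figures are assumed, SR626-class):

```python
capacity_j = 0.028 * 1.55 * 3600   # Ah * V * 3600 s/h -> joules (~156 J)
standby_w = 10e-9                  # N-ZERO standby target, 10 nW

years = capacity_j / standby_w / (3600 * 24 * 365)
print(f"{years:.0f} years")  # far beyond any cell's shelf life
```

At a 10 nW draw, the cell’s chemistry would fail from self-discharge centuries before the sensor electronics drained it, which is exactly the regime N-ZERO targets.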

It should also attain a low false alarm rate of 1 per hour or better in an urban environment. Upon detection of a signal having the signature of interest, the N-ZERO component devices must produce a logic state capable of waking up commercial-off-the-shelf (COTS) electronics for further (post wake-up) processing and signal communication.

There are two primary challenges for the N-ZERO program in developing an “OFF-but-Alert” sensor technology. The first challenge is to close the sizable gap between the extremely small signal levels measured by RF and physical sensors and the relatively large threshold voltages required by state-of-the-art comparators. N-ZERO aims to bridge that gap without supplying any active power (≤ 10 nW) in the standby state when the signatures of interest are absent.

The second challenge is the discrimination of the events or signatures of interest from noise and interference, again without supplying active power. The critical technologies created by the N-ZERO program are intended to establish methods to provide large passive voltage gain, develop passive signal-processing circuits to prevent false detection, and realize comparators operating at extremely low threshold voltages with near-zero power consumption, enabled by steep sub-threshold swing. This three-pronged approach is intended to result in microsystems capable of detecting and processing signals with near-zero power consumption (≤ 10 nW).

DARPA has been able to create zero-power receivers that can detect very weak signals: radio-frequency (RF) transmissions below −70 dBm, a sensitivity better than originally expected. The system has also been able to detect objects correctly without raising false alarms, which would otherwise drain battery life. In the program’s current phase, the sensors need to distinguish between cars, trucks and generators in an urban environment at close range, and in the final phase they will be required to classify those same targets from 10 meters (33 feet).
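For context, dBm is a logarithmic power unit referenced to 1 mW, so a −70 dBm signal carries only about 100 picowatts. A quick conversion sketch:

```python
def dbm_to_watts(dbm):
    """Convert a power level in dBm to watts (0 dBm = 1 mW)."""
    return 1e-3 * 10 ** (dbm / 10)

# A -70 dBm transmission carries roughly 100 picowatts:
print(dbm_to_watts(-70))  # approximately 1e-10 W
```

Detecting a signal that weak while the receiver itself consumes almost nothing is what makes the result notable.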

“The ability to sense and classify cars, trucks and generators in … both rural and urban backgrounds from a distance of a little over 5 meters away and being able to do that with almost 10 nanowatts of power consumption is a big accomplishment in phase one of the program,” Olsson says.


DARPA to develop new IR-based sensor technology

A team of researchers at Northeastern University in Boston will develop a sensor powered by IR energy, as part of DARPA’s Near Zero Power RF and Sensor Operation (N-ZERO) programme.

DARPA Microsystems Technology Office N-ZERO Program manager Troy Olsson said: “What is really interesting about the Northeastern IR sensor technology is that, unlike conventional sensors, it consumes zero stand-by power when the IR wavelengths to be detected are not present.

“When those IR wavelengths are present and impinge on the Northeastern team’s IR sensor, the energy from the IR source heats the sensing elements which, in turn, causes physical movement of key sensor components. These motions result in the mechanical closing of otherwise open circuit elements, thereby leading to signals that the target IR signature has been detected.”

The IR sensor technology features multiple sensing elements, each of which is adapted to absorb a specific IR wavelength.

These elements combine into complex logic circuits that are capable of analysing IR spectra, allowing sensors to detect IR energy in the environment and determine whether that energy derives from a fire, vehicle, person or some other IR source.

The sensor also includes a grid of nanoscale patches whose specific dimensions limit them to absorb only particular IR wavelengths, DARPA stated.

Northeastern University Electrical and Computer Engineering associate professor Matteo Rinaldi said: “The charge-based excitations, called plasmons (that can be thought of somewhat like ripples on the surface of water), are highly localised below the nanoscale patches and effectively trap specific wavelengths of light into the ultra-thin structure, inducing a relatively large and swift spike in its temperature.”

DARPA Award Funds Richard Shi’s Work to Develop New Low-Power Sensors

Under a Defense Advanced Research Projects Agency (DARPA) grant, University of Washington EE Professor Richard Shi will develop specialized sensors that can operate with minimal power and remain dormant until triggered.

Through the Near Zero Power RF and Sensor Operations (N-ZERO) program, Shi will develop specialized sensors that are capable of continuously and passively monitoring the environment, with the ability to fully activate in response to specific triggers. Current sensors consume power continuously, which in turn limits the sensor lifetime to months. Expensive batteries must also be frequently replaced.

In addition to current sensors consuming power continuously, a considerable amount of energy is also used by the electronic devices that communicate with the sensors. Therefore, the project will also entail developing radio receivers that can be activated by a radio-frequency trigger. Like the sensors, the radio receivers will then expend power only when useful information is being communicated.


DARPA awards $1.8 million for ‘near-zero’ power sensors at UC Davis

The U.S. Defense Advanced Research Projects Agency (DARPA) has presented a $1.8 million grant to a project headed by David Horsley, a professor in the UC Davis Department of Mechanical and Aerospace Engineering. The project, “Ultralow Power Microsystems Via an Integrated Piezoelectric MEMS-CMOS Platform,” includes the participation of co-PIs Xiaoguang “Leo” Liu and Rajeevan Amirtharajah, both professors in the UC Davis Department of Electrical and Computer Engineering.

Horsley’s group has teamed up with InvenSense, the company that makes the motion sensors — gyro and accelerometer — in everybody’s smart phones. “DARPA likes to have technology that can be translated into a practical application,” Horsley said. “One strength of our program is that we’re working directly with a high-volume manufacturer, so the chips we are designing are being made at a production facility and can be rapidly transitioned to production for DoD use at the end of the program.”

The program goal is to develop an acoustic sensor and an acceleration sensor that run on near-zero power, producing a wake-up signal when a particular signature is detected: say, a car or truck driving by, or a generator being switched on. “But we don’t have to be able to distinguish between any of those vehicles,” Horsley noted. “In Phase Two, however, we will have to be able to say, ‘This was a truck’ or ‘This was a car.'”

Horsley said the sensors are “kind of like having the ultimate geophone, where you’re sensing for earthquakes, sensing vibrations in the earth.”

“We have sensors that we’re testing now that are running at below 10 nanowatts,” he said. By way of comparison, the existing sensors in smart phones, although already operating on low power, nonetheless require about 10 milliwatts: roughly 1 million times more power than the sensors being developed by Horsley’s team.

“At the end of Phase One, which will be coming up toward the end of this year, we’re going to deliver the hardware,” Horsley said. “We have a very-low-power acceleration sensor and a microphone that we’re going to deliver to the government, and they’re going to have this independently evaluated at Lincoln Lab at MIT.”

Horsley believes that in the not-too-distant future an ultra-low-powered remote sensor could be triggered by events other than ground noise. One could, for example, have a microphone that’s on all the time listening for a specific keyword. “So one vision for this technology is that … you wouldn’t have to fire up a processor, like an applications processor, or get connected to the cloud to be able to have it do keyword recognition,” Horsley said. “That’s pretty far from where we are now, but it certainly seems like we’re in the right direction to get there.”



Plethora of Battery breakthroughs to power future consumer electronics, smart homes, electric vehicles and Military Missions

Rechargeable lithium-ion batteries have been the workhorse of the consumer electronics market, including portable electronics, implantable devices, power tools, and hybrid/full electric vehicles (EVs), owing to their ability to store large amounts of energy per unit weight and per unit volume, their low self-discharge rate, and their long cycle life. They are also relatively maintenance-free and contain fewer toxic chemicals than other batteries.

Recently, South Korean giant Samsung ordered a complete recall of its latest flagship device, the Galaxy Note 7, after consumers around the globe reported their handsets exploding and causing damage. Because of these safety issues, researchers have developed many promising new battery chemistries to replace lithium-ion. The increasing cost and limited reserves of lithium may also restrict its further application, demanding urgent development of low-cost batteries based on new energy storage chemistries.

New batteries are also required to satisfy the increasing demands of high-performance, energy storage devices to power electric vehicles, smart homes, smart phones, and even smart wearables.

Batteries are also critical for military missions, since mission success and soldiers’ lives often depend directly on a military battery’s performance. The expected improvements in energy density may enable advances in directed energy weapons, increase the loiter time of unmanned vehicles, lead to more effective sensors, and reduce the size and weight of man-portable equipment.

We’re on the verge of a power revolution, with a plethora of battery discoveries coming into the commercial domain soon. Tech companies and car manufacturers are pumping money into battery development. Some of the promising batteries are lithium-air breathing and aluminum-air batteries, gold nanowire batteries, titanium dioxide anode batteries, silicon and germanium nanowire batteries, and solid-state and graphene batteries.

However, there is a need to develop efficient manufacturing processes, enhance durability and safety, and reduce costs before consumers start using these non-traditional batteries. Research firm IDTechEx estimates that advanced and post-lithium-ion battery technologies will achieve a market value of $14bn in 2026, comprising about 10 per cent of the entire battery market.

Military Requirements for Batteries

“Batteries enable radio communication among combat squad members and field headquarters. They provide the power to obtain accurate location data essential for maneuver and combat air support. Laser range finders and night-vision goggles are two more examples of battery-powered capabilities that give U.S. troops battlefield superiority,” says RAND.

“This variety in battery applications leads to variety in the types of batteries that the military acquires; a battery cell designed to periodically provide small amounts of power to a flashlight is built differently than a large, one-shot cell inside a missile, which may lie dormant for many years and then be expected to provide a large amount of power at a moment’s notice.”

For military purposes, however, battery cost is secondary to dependability. The military is therefore willing to pay somewhat higher prices to ensure that its batteries will be effective in combat situations and rugged environments. In addition to requirements on battery performance characteristics, requirements are established for battery survivability in harsh conditions.

Because the rate at which the chemicals within a battery react depends on temperature, reactions proceed more quickly as temperature increases and more slowly as it drops. This means that, for military batteries, great care has to be taken to ensure that the power delivered in a cold-weather environment is still sufficient to meet a soldier’s needs.
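The temperature dependence described here follows the Arrhenius relation, k = A·exp(−Ea/RT). A small illustrative sketch (the 40 kJ/mol activation energy is an assumed, representative value, not a measured battery parameter):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def relative_rate(t_celsius, t_ref_celsius=25.0, ea_j_per_mol=40_000.0):
    """Arrhenius reaction rate at t_celsius relative to t_ref_celsius.
    The 40 kJ/mol activation energy is an assumed, illustrative value."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return math.exp(-ea_j_per_mol / (R * t) + ea_j_per_mol / (R * t_ref))

# At -30 C the chemistry runs at only a few per cent of its
# room-temperature rate, which is why cold-weather power delivery
# needs special attention in military requirements.
print(relative_rate(-30))  # about 0.026
```

Under these assumptions, a battery at −30 °C reacts at roughly 3% of its room-temperature rate, which illustrates why cold-weather performance must be specified explicitly.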

Because a battery is an energy storage device, by definition a good battery contains large amounts of energy in a confined space. Safety, therefore, is a paramount concern, and it must be established that Soldier Portable batteries (SPBs) will fail gracefully, i.e., without damaging other components of an electrical system or posing a danger to operators. Graceful failure must hold in the face of many different types of possible abuse: Requirements cover the testing necessary to establish the battery’s response to explosive decompression, submersion, thermal and mechanical shock, sand and dust storms, and numerous other environmental hazards that a military battery might encounter during its service life.

Improving current Lithium-ion Batteries

A lithium-ion battery (LIB) consists of a graphite electrode (anode), an electrolyte (usually a lithium salt), and a metal oxide electrode (usually an oxide containing lithium). The graphite anode has a theoretical specific capacity of 372 mAh/g, and lithium-ion cells endure hundreds of charge cycles, so they are routinely packed into mobile phones, laptops and electric cars. According to Frost & Sullivan, a leading growth-consulting firm, the global market for rechargeable lithium-ion batteries is projected to be worth US$23.4 billion in 2016.
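For orientation, a specific capacity in mAh/g converts to an idealized energy figure when multiplied by the cell voltage (mAh/g is numerically equal to Ah/kg). A sketch, assuming a nominal 3.7 V cell voltage and counting only the anode material:

```python
def specific_energy_wh_per_kg(capacity_mah_per_g, voltage_v):
    """Idealized specific energy: capacity x average voltage.
    A value in mAh/g is numerically equal to Ah/kg."""
    return capacity_mah_per_g * voltage_v

# Graphite's theoretical capacity at an assumed 3.7 V nominal voltage,
# ignoring the cathode, electrolyte and packaging mass:
print(specific_energy_wh_per_kg(372, 3.7))  # about 1376 Wh/kg, anode-only
```

Real packaged cells deliver far less (on the order of 100–250 Wh/kg), because the cathode, electrolyte and packaging add mass without adding anode capacity.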

Toyota, after observing the behavior of lithium ions in an electrolyte as a battery charges and discharges, has found the reason why a battery ages. Toyota’s battery researchers expect to use the new observation method to develop batteries that hold a better charge and lead a longer life. Once the breakthrough is commercialized, which could take “two to three years,” a new lithium-ion battery could improve the battery-powered range of an electric vehicle by 15%, Dr. Hisao Yamashige of Toyota’s advanced R&D and engineering division told a small group of reporters at the company’s Tokyo HQ.

For widespread adoption of electric cars, their range needs to be increased, which requires dramatic improvements in battery energy density and cycle durability as well as decreases in cost. The next generation of batteries, able to fully charge more quickly and store 30%–40% more electricity than today’s lithium-ion batteries, could help transform the electric car market, allow the storage of solar electricity at the household scale and power implantable medical devices.


Samsung hails ‘graphene ball’ battery success


Samsung’s recall of the Galaxy Note 7 came after consumers around the globe reported their handsets exploding and causing damage. Samsung said in a statement that it expects a further $3 billion in lost income from its move to scrap the fire-prone phone, raising the financial impact of the crisis to the equivalent of about half of its mobile division’s profit last year. One of the main causes of the fires appears to be the continuous increase in the energy density of lithium-ion battery units, driven by rising user requirements, including full-HD displays, the processing demands of multi-core CPUs and the desire for ever-sleeker designs.

Recently, a team of researchers at the Samsung Advanced Institute of Technology (SAIT) developed a “graphene ball,” a unique battery material that enables a 45% increase in capacity and five times faster charging speeds than standard lithium-ion batteries. The breakthrough provides promise for the next-generation secondary battery market, particularly related to mobile devices and electric vehicles.

In its research, SAIT sought an approach to apply graphene, a material with high strength and conductivity, to batteries, and discovered a mechanism to mass-synthesize graphene into a popcorn-like 3D form using affordable silica (SiO2). This “graphene ball” was used both as the anode protective layer and in the cathode materials of lithium-ion batteries, increasing charging capacity, cutting charging time and keeping temperatures stable.

In theory, a battery based on the “graphene ball” material requires only 12 minutes to fully charge. Additionally, the battery can maintain a highly stable 60 degree Celsius temperature, with stable battery temperatures particularly key for electric vehicles.


Emerging New Battery Chemistries

Batteries are of two types. Primary batteries are single-use: once they are discharged, they must be discarded. Rechargeable batteries are generally referred to as secondary batteries.

Zinc Air

Zinc-air batteries have long been thought of as a safer, cheaper and more sustainable replacement for lithium-ion. Zinc is one of the world’s most abundant metals and is more environmentally friendly to extract than lithium, but since zinc-air batteries were first produced in the 1930s they have only been single-use, powering devices like hearing aids.

However, a team from the University of Sydney’s Faculty of Engineering and IT has developed a new way to recharge the batteries using a three-step method. Lead researcher Professor Yuan Chen says they found a way to control the composition, size and crystallinity of iron and cobalt catalysts, creating bifunctional oxygen electrocatalysts, in other words, catalysts that drive both the oxygen-consuming reaction that discharges the battery and the oxygen-releasing reaction that recharges it.

Their capacity to store five times more energy than current models makes the batteries suited to powering electric cars and other long-lasting devices. Professor Chen said that despite the excitement over the find, the technology will take time to perfect, with issues including a limited cycle life of about 120 recharges, compared with lithium-ion’s 400–1,200 range, and a slow energy-delivery speed.


China develops Zn and Mg Batteries to Power Low-Speed Electric Vehicles (LSEVs)

Under the guidance of CAS academician CHEN Liquan, the Qingdao Industrial Energy Storage Research Institute (QIESRI) of the Qingdao Institute of Bioenergy and Bioprocess Technology (QIBEBT), Chinese Academy of Sciences, discovered that highly concentrated aqueous electrolytes can optimize the Zn stripping/deposition processes, and creatively proposed a smart cooling-recovery function using a thermoreversible hydrogel as the functional electrolyte, which can repair interfacial failure during cycling (Angew. Chem. Int. Ed., 2017, 56, 7871; Electrochem. Commun., 2016, 69, 6; ACS Appl. Mater. Interfaces, 2015, 7, 26396).

Based on preliminary progress in basic research and technological development, QIESRI broke through the technical bottlenecks of Zn batteries in pilot-scale research and successfully developed new types of Zn batteries with high safety, energy density up to 40 Wh/kg, cycle life up to 500 cycles and cost below 0.7 ¥/Wh, which are promising for applications in LSEVs, large-scale energy storage and flexible electronic devices.

In addition, QIESRI made the first attempt to design and synthesize boron-centered-anion-based Mg-ion electrolytes characterized by high ionic conductivity, non-nucleophilicity, and wide electrochemical window. The formation energy and phase transformation of the discharged intermediates are extensively investigated to understand the Mg-ion storage mechanism (Adv. Energy Mater., 2017, 1602055; Small, 2017, 1702277; Electrochem. Commun., 2017, 83, 72; J. Mater. Chem. A, 2016, 4, 2277). These scientific advances provide potential benefits and new research directions for future low-cost secondary Mg batteries.

These prospective research efforts on rechargeable Zn and Mg batteries are highly in line with the lead-free trend in LSEVs and corresponding low-cost applications, and would make an important technical contribution to the green development of industry in China, according to CAS.


Lithium-air breathing batteries

Many companies and researchers are experimenting with new lithium-air batteries that are smaller, lighter, and more energy-efficient than their forebears. Scientists from the Massachusetts Institute of Technology, the Argonne National Laboratory, and Peking University in China revealed a promising new version of the lithium-air battery. Their new device, the scientists said, can serve as a drop-in replacement for lithium-ion while storing over five times more energy than today’s batteries.

According to the scientists, the new design uses solid oxygen electrodes to overcome many of lithium-air’s drawbacks. This new battery loses much less energy in the form of heat than earlier versions. The result is that it lasts longer and is more energy-efficient, making it a better option for electric cars and renewable energy storage.

“This means faster charging for cars, as heat removal from the battery pack is less of a safety concern, as well as energy efficiency benefits,” said Ju Li, an MIT professor of nuclear science and engineering and author of the research.

Until now, lithium-air batteries have inhaled outside air, driving a chemical reaction with the battery’s lithium while electric current flows out; the oxygen is released back to the atmosphere during the charging cycle. The chemical reaction also produces other compounds, known as lithium peroxide, that slowly clog the battery electrodes.

That raises several problems. According to Li, the solid particles formed by the reaction cause the battery to degrade faster than lithium-ion batteries, which are completely sealed from outside air. When the battery degrades, it stores less energy.

The battery is also prone to losing energy in the form of heat. Its output is more than 1.2 volts lower than the voltage needed to charge it, causing it to lose 30% of the electricity as heat. Because of that, the battery can “actually burn if you charge it too fast,” said Li. Overcharging can lead to structural damage or an explosive reaction known as thermal runaway.
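Those two numbers are mutually consistent: the fraction of round-trip energy lost as heat is (V_charge − V_discharge)/V_charge, so a 1.2 V gap with 30% loss implies roughly 4.0 V to charge and 2.8 V on discharge (these individual voltages are inferred here, not stated in the source):

```python
def heat_loss_fraction(v_charge, v_discharge):
    """Fraction of round-trip electrical energy dissipated as heat,
    assuming the same charge passes in both directions."""
    return (v_charge - v_discharge) / v_charge

# A 1.2 V gap with 30% loss implies about 4.0 V to charge, 2.8 V out:
v_charge = 1.2 / 0.30
v_discharge = v_charge - 1.2
print(heat_loss_fraction(v_charge, v_discharge))  # about 0.30
```

The same formula shows why the new design is better: cutting the loss to 8% means the charge and discharge voltages are much closer together.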

The new design solves these problems by closing off the battery to outside oxygen. The same electrochemical reactions take place between lithium and oxygen during charging and discharging, but they take place without ever using oxygen gas. Instead, the oxygen stays inside the battery and switches between three solid chemical compounds: Li2O, Li2O2, and LiO2. This prevents the damaging particles from forming.

The new battery sharply cuts voltage loss, so only 8% of the electrical energy is lost as heat. It also inherently guards against overcharging: The device can shift between different lithium compounds if it is being overcharged to stop activity that might cause damage. The scientists overcharged the battery to 100 times its capacity for 15 days without any damage. They also found that through 120 charging cycles, the battery only lost 2% of its capacity.

The lithium-air battery can also do without components to pump air inside and out of the battery. Without these auxiliary parts, it can easily be adapted to existing devices or battery packs inside cars and power grid storage.


Aluminum Air battery

In January 2015, Japanese company Fuji Pigment Co. Ltd. announced it was developing a new type of battery called an aluminum-air battery. It simply needs to be filled with saltwater or fresh water to charge, and its theoretical specific energy is 8,100 Wh/kg (watt-hours per kilogram). Compare that with commercial lithium-ion batteries, which have specific energies of 100–200 Wh/kg. It should last a hefty 14 days, according to its creators at Fuji Pigment.

Aluminium-air batteries have a theoretical capacity more than 40 times greater than that of lithium-ion cells. An aluminum-air battery generates electricity from the reaction of oxygen and aluminum, using water as an electrolyte. Furthermore, aluminum is abundant, commercially cheap and the most recycled metal in the world. As a result, aluminum-air batteries should also be cheap.
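The “more than 40 times” claim follows directly from the theoretical figures quoted above:

```python
al_air_wh_per_kg = 8100              # theoretical specific energy quoted above
li_ion_range_wh_per_kg = (200, 100)  # commercial lithium-ion range quoted above

ratios = [al_air_wh_per_kg / x for x in li_ion_range_wh_per_kg]
print(ratios)  # [40.5, 81.0]: "more than 40 times", as claimed
```

Even against the best commercial lithium-ion figure in the range, the theoretical advantage is just over fortyfold; practical cells would of course capture only a fraction of it.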

A standard aluminium-air reaction consumes the aluminum anode, which must be physically replaced rather than electrically recharged. But Fuji Pigment claims that, by adding strategically placed layers of ceramic and carbon, it has managed to suppress corrosion and reaction by-products, creating an aluminium-air battery that can be recharged multiple times by simply adding water.

Phinergy has achieved an aluminum-air battery breakthrough by using a silver-based catalyst that lets only oxygen from the ambient air into the positive cathode. The O2 then combines with the liquid electrolyte, releasing the latent electrical energy stored within the aluminium anode. The “air cathode” acts like a breathable fabric, letting in O2 but not carbon dioxide, which would foul the chemical reaction.

Lighter, more compact, with greater energy output and conceivably less than half the price of lithium-ion batteries, aluminum-air technology might transform EV appeal. Renault is the front-runner to adopt this game-changing aluminium-air battery, which could yield a sevenfold boost in the electric Zoe’s 130-mile range.


Gold nanowire batteries, the batteries that last a LIFETIME

A standard lithium-ion battery used in most smartphones is expected to deliver 300 to 500 charge cycles before it starts to lose a sizeable chunk of capacity. The system designed by doctoral candidate Mya Le Thai can be cycled hundreds of thousands of times without wearing out, which could lead to a battery that never needs to be replaced.

Researchers replaced the traditional lithium with gold nanowires, which are thousands of times thinner than a human hair and have extremely high conductivity and surface area, making them ideal for the transfer and storage of electrons.

Nanowires hold great promise for future batteries, but they become brittle after multiple charge cycles, resulting in tiny cracks that spread inside the battery. The team at UCI avoided that problem by coating gold nanowires in manganese dioxide, for a total thickness of just 300 nm, and encasing them in a gel called polymethyl methacrylate (PMMA). The performance of these batteries declined only 5% after more than 200,000 recharges over three months. This could be ideal for future electric cars, spacecraft and phones that will never need new batteries.
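The quoted retention implies an extraordinarily small per-cycle fade. A sketch of the arithmetic, assuming the fade is spread uniformly (exponentially) across the test:

```python
cycles = 200_000
retention = 0.95  # 5% capacity decline over the whole test

# Average capacity retained per cycle, assuming uniform exponential fade:
per_cycle = retention ** (1 / cycles)
fade_ppm = (1 - per_cycle) * 1e6
print(fade_ppm)  # roughly 0.26 parts per million of capacity lost per cycle
```

For comparison, a conventional cell that loses 20% over 500 cycles fades around a thousand times faster per cycle.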


Ultrafast charging titanium dioxide anode battery

Researchers at Nanyang Technological University have developed a fast-charging titanium dioxide anode battery. The batteries can be recharged to 70 per cent in only two minutes and also have a long lifespan of over 20 years, more than 10 times that of existing lithium-ion batteries.
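Charging 70 per cent of capacity in two minutes corresponds to a very high average C-rate (a rate of nC means a full charge in 1/n hours). A quick illustration:

```python
def c_rate(fraction_charged, minutes):
    """Average C-rate needed to charge `fraction_charged` of the
    capacity in `minutes` (1C = one full capacity in one hour)."""
    return fraction_charged / (minutes / 60)

print(c_rate(0.70, 2))  # about 21, i.e. an average rate of roughly 21C
```

Typical consumer lithium-ion cells charge at around 0.5C to 1C, which puts the claimed two-minute charge more than an order of magnitude beyond normal practice.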

The titanium dioxide nanotubes were used in a gel that transfers electrons more efficiently than today’s graphite anodes, speeding up the charging process. The gel also delays deterioration, multiplying the battery’s lifespan by six to seven times. Titanium dioxide batteries are not just great for smartphones, but for electric cars as well.

Another plus point for this new breed of batteries is the abundance of titanium dioxide in nature. It is a naturally occurring oxide found mainly in ilmenite and rutile ores, and many miners are dedicated to producing these minerals. One such promising mining company is White Mountain Titanium Corporation (OTCQB: WMTM), which operates its Cerro Blanco titanium project at Chile’s Atacama region. It is expected to produce as much as 112 million high-grade rutile tons, which can be used for the development of titanium dioxide anodes that could upgrade the lithium-ion batteries and finally help them keep up with the fast-changing times.

Manufacturing this new nanotube gel is very easy. Titanium dioxide and sodium hydroxide are mixed together and stirred at a controlled temperature, so battery manufacturers will find it easy to integrate the new gel into their current production processes.

Silicon Anodes

Silicon and germanium are two candidates to replace the traditional graphite anode and increase the energy density and cycle durability of lithium-ion batteries. Even though silicon has one of the highest known theoretical specific capacities of any anode material (~3600 mAh/g), it has seen limited use as an anode in lithium-ion batteries because of the mechanical instability caused by the large volume expansion that occurs upon Li insertion. Like silicon, germanium doesn’t handle charging very well: it expands during charging and disintegrates after a small number of cycles.
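Comparing the theoretical figures shows why silicon attracts so much interest despite the expansion problem (372 mAh/g is graphite’s theoretical capacity, quoted earlier):

```python
si_capacity_mah_per_g = 3600       # theoretical, as quoted for silicon
graphite_capacity_mah_per_g = 372  # theoretical, quoted earlier for graphite

ratio = si_capacity_mah_per_g / graphite_capacity_mah_per_g
print(ratio)  # about 9.7, i.e. nearly ten times the charge per gram
```

That near-tenfold headroom is what motivates the binder and nanowire work described below, all of it aimed at taming the volume change rather than the chemistry itself.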

A KAIST research team led by Professors Jang Wook Choi and Ali Coskun reported a molecular pulley binder for high-capacity silicon anodes of lithium ion batteries in Science in July 2017. The KAIST team integrated molecular pulleys, called polyrotaxanes, into a battery electrode binder, a polymer included in battery electrodes to attach the electrodes onto metallic substrates.

In a polyrotaxane, rings are threaded onto a polymer backbone and can move freely along it. This free movement allows the rings to follow the volume changes of the silicon particles: their sliding motion can efficiently hold Si particles together, without disintegration, throughout the particles’ continuous volume change.

It is remarkable that even pulverized silicon particles remain coalesced because of the high elasticity of the polyrotaxane binder. The functionality of the new binders is in sharp contrast with that of existing binders (usually simple linear polymers) of limited elasticity, which cannot hold pulverized particles firmly. Previous binders allowed pulverized particles to scatter, so the silicon electrode degraded and lost its capacity.


Silicon and Germanium Nanowire

Researchers have turned to using silicon nanowire and germanium nanowire anodes due to their advantages like efficient electron transport and larger surface area that further increases the battery’s power density, allowing for fast charging and current delivery.

Researchers at the University of California, Riverside (UCR) have developed a silicon anode for lithium-ion batteries that outperforms current materials and gets around previous issues. A research team led by professors Mihri and Cengiz Ozkan developed an electrode consisting of sponge-like silicon nanofibers with several structural advancements at the nanometer scale that help minimize the undesired large volume expansion observed in other standard Si materials.

Research at University of California, Los Angeles, has shown that growing a SiO2 layer on silicon nanowires (SiNW) can improve cycle life to 400 cycles at a capacity of 2400 mAh/g. Canonical announced on July 22, 2013, that its Ubuntu Edge smartphone would contain a silicon-anode lithium-ion battery. Amprius currently makes silicon nanowires in a small-scale batch process using chemical vapor deposition (CVD), a process borrowed from the semiconductor industry.

A research team at the University of Limerick, Ireland, restructured germanium using nanowires to create a porous material that remains stable during charging. The anodes were claimed to retain capacities of 900 mAh/g after 1100 cycles, even at discharge rates of 20–100C.

This performance was attributed to a restructuring of the nanowires that occurs within the first 100 cycles to form a mechanically robust, continuously porous network.

In 2014, researchers at Missouri University of Science and Technology developed a simple way to produce nanowires of germanium from an aqueous solution. They modified the electrochemical liquid-liquid-solid process (ec-LLS), an electrodeposition process designed by a group of researchers at the University of Michigan, in order to grow nanowires of germanium using liquid metal electrodes at room temperature. Their one-step approach could lead to a simpler, less expensive way to grow germanium nanowires.

Supercapacitors based Battery will let phones charge in seconds and last for a week

A new type of battery that lasts for days with only a few seconds’ charge has been created by researchers at the University of Central Florida. The high-powered battery is packed with supercapacitors that can store a large amount of energy. It looks like a thin piece of flexible metal about the size of a fingernail and could be used in phones, electric vehicles and wearables, according to the researchers.

“If they were to replace the batteries with these supercapacitors, you could charge your mobile phone in a few seconds and you wouldn’t need to charge it again for over a week,” said Professor Nitin Choudhary, one of the researchers behind the new technology.

Until now, supercapacitors were not used to make batteries because they would have to be much larger than batteries currently available. The Florida researchers have overcome this hurdle by making their supercapacitors with tiny wires that are a nanometre thick. Coated with a high-energy shell, the core of the wires is highly conductive to allow for superfast charging.

“For small electronic devices, our materials are surpassing the conventional ones worldwide in terms of energy density, power density and cyclic stability,” said Prof Choudhary. Cyclic stability refers to the number of times a battery can be fully charged and drained before it starts to degrade.


Solid state batteries

In solid-state batteries the liquid electrolytes normally used in conventional lithium-ion batteries are replaced with solid ones, which make it possible to replace conventional electrodes with lithium metal ones that hold far more energy. Doing away with the liquid electrolyte, which is flammable, can also improve the safety of batteries, which leads to cost and size savings, particularly in electric vehicles, by reducing the need for complex cooling systems.

The result is a battery that can operate at supercapacitor levels, completely charging or discharging in just seven minutes, making it ideal for cars. Because it is solid state, it is also far more stable and safer than current batteries. The solid-state unit should also be able to work at temperatures as low as minus 30 degrees Celsius and as high as one hundred.

Researchers at Toyota and the Tokyo Institute of Technology said they had developed solid-state batteries with more than three times the storage capacity of lithium-ion batteries. Hitachi Zosen of Japan says it plans to commercialize the technology by 2020, but acknowledges it has yet to work out the manufacturing process.

Dyson has invested $15 million in Sakti3, a Michigan-based developer of solid-state battery technology. Sakti3 has successfully demonstrated a battery that produces 1,000 watt-hours of energy per liter of battery volume, which in practice could more than double the driving range of a current Tesla. Pathion CEO Michael Liddle projects that solid-state battery technology will be market-ready within two years.

The real question is whether they can produce that affordably and at scale; Pathion’s CEO Michael Liddle says “Many startups and researchers can produce a better cathode, anode, or electrolyte, but all three must work together perfectly to make a battery. The capital to bring the pieces together, and bring production of new batteries to scale, has been scarce.”

Graphene car batteries

Fisker has promised his new electric automobile will have a range surpassing 400 miles, which would be huge, considering the longest range now belongs to a high-end version of the Model S, which gets 315 miles on a single charge. Rather than working with conventional lithium-ion batteries, Fisker is turning to graphene supercapacitors. Graphene is the thinnest material on Earth and the strongest material known to man.

Graphene batteries are the future. “Graphene shows a higher electron mobility, meaning that electrons can move faster through it. This will, e.g. charge a battery much faster,” Lucia Gauchia, an assistant professor of energy storage systems at Michigan Technological University, told Business Insider. “Graphene is also lighter and it can present a higher active surface, so that more charge can be stored.”

“The reason we are not using it yet, even though the material is not a new one, is that there is no mass production for it yet that can show reasonable cost and scalability,” Gauchia explained.

But Fisker told Business Insider that his battery division, Fisker Nanotech, is patenting a machine that he claims can produce as much as 1,000 kilograms of graphene at a cost of only 10 cents a gram.

“The challenge with using graphene in a supercapacitor in the past has been that we don’t have the same density and ability to store as much energy,” said Jack Kavanaugh, the head of Fisker Nanotech. “Well, we have solved that issue with technology we are working on.” Kavanaugh said altering the structure of the graphene has allowed them to improve the supercapacitor’s energy density, though he didn’t elaborate further since the technology is “unique and proprietary.” He added that a patent for the machine is pending.

Graphenano company has developed a new battery, called Grabat, that could offer electric cars a driving range of up to 500 miles on a charge. The batteries can be charged to full in just a few minutes; it charges and discharges 33 times faster than lithium ion. The capacity of the 2.3V Grabat is huge with around 1000 Wh/kg which compares to lithium ion’s current 180 Wh/kg.
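To put the quoted energy densities in perspective, here is a back-of-the-envelope sketch (the 60 kWh pack size is an assumed example for illustration, not a figure from the article):

```python
# Rough comparison: pack mass needed for a given energy budget at the
# article's quoted energy densities (Li-ion ~180 Wh/kg vs Grabat ~1000 Wh/kg).

def pack_mass_kg(energy_wh: float, density_wh_per_kg: float) -> float:
    """Mass of a battery pack storing energy_wh at the given density."""
    return energy_wh / density_wh_per_kg

EV_PACK_WH = 60_000  # assumed 60 kWh electric-vehicle pack (illustrative)

print(pack_mass_kg(EV_PACK_WH, 180))   # ~333 kg with today's Li-ion
print(pack_mass_kg(EV_PACK_WH, 1000))  # 60 kg at the claimed Grabat density
```

If the claimed density held up, the same pack would weigh less than a fifth as much, which is the basis of the range claims above.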


Military Batteries

The advantages of Li-ion batteries are reduced when they are used in high-temperature environments or are forced to generate large currents for extended periods of time.

Two nickel-based chemistries are also used in rechargeable batteries: nickel-metal-hydride (NiMH) and nickel-cadmium (NiCd). Both chemistries involve positive electrodes made of NiOOH (nickel oxyhydroxide) but differ in the materials used in their negative electrodes. NiMH has largely replaced NiCd because of its much greater specific energy and lower toxicity. NiMH batteries are competitive with Li-ion technology in some applications, and can match the lower end of the Li-ion battery spectrum in specific energy.

When compared to Li-ion technology in other respects, though, NiMH batteries have several disadvantages. For example, their high self-discharge rate keeps them from being stored for any length of time without needing to be recharged. Battery structures with lower self-discharge have been introduced, but generally have lower capacity than standard varieties.

Sulphur-based batteries

Lithium-sulphur is a closely watched technology that can also be used for military and aerospace applications. The batteries’ energy density is at least twice that of current lithium-ion batteries.
Oxis Energy, an Oxfordshire-based company that holds a patent for lithium-sulphur batteries, says it has achieved a theoretical energy density five times greater than lithium-ion. It is working with Seat, the Spanish car brand owned by Volkswagen. NASA, the US space agency, has invested in lithium-sulphur batteries for exploration missions.

For the technology to move from experiment to commercial product it will need to achieve longer life cycles. Mr Gonzalez at IDTechEx adds that start-ups must be able to produce the same-quality batteries in large volumes.


US, China and Russia developing Small / floating Nuclear Reactors to power military forward operating bases, disputed islands and Arctic

The U.S. Department of Defense (DOD) is increasingly interested in the potential of small modular reactors (SMRs), defined as nuclear reactors generally of 300 MWe equivalent or less. DOD’s attention to small reactors stems mainly from two critical vulnerabilities it has identified in its infrastructure and operations: the dependence of U.S. military bases on the fragile civilian electrical grid, and the challenge of safely and reliably supplying energy to troops in forward operating locations.

SMRs have generated global interest, and potential future applications are a subject of international research directives. There are around 50 different SMR designs worldwide, according to the IAEA. Project proposals include the use of SMRs for desalination, process heat generation, biofuel conversion and military base installations. Furthermore, SMR safety systems reduce threats to public health; decrease the global stockpile of weapons-grade material and radioactive waste; and provide critical infrastructure support on military installations worldwide.

The U.S. Army built the world’s first floating reactor, the SS Sturgis MH-1A, a 10-megawatt converted Liberty Ship, in 1967. It supplied power to the Panama Canal Zone from 1968 to 1975, before being defueled in 1977. Decades later, in 2010, Russia launched the 21,000-ton, 70 megawatt Akademik Lomonosov, which is expected to deploy in 2018 or 2019 to Vilyuchinsk, on the remote Kamchatka Peninsula.

Russia is in advanced stages of building the world’s first “floating” nuclear power plant (FNPP) for installation in remote areas and hopes FNPP technology will also interest South Asian countries like India. While the plant is already being tested, construction of the dock has begun on the Arctic coast in Russia’s Far East. Any industrial project in the Arctic would require tons of electric energy, which is why Russia is also developing floating nuclear power plants. Russian company Rosenergoatom (part of the Rosatom state-owned corporation) launched a project in 2006 to build floating NPPs in regions with limited energy capabilities.

China has said it will develop floating nuclear power plants on a priority basis in the South China Sea as it seeks to beef up electricity supply to the islands in the disputed maritime region.  The floating nuclear reactors could also power Chinese underwater mining operations, in which China has already invested heavily, and deepwater logistical bases for naval usage.

The new generation SMR is designed with modular technology using module factory fabrication, pursuing economies of series production and short construction times. While these designs are promising, they are untested and present new proliferation risks. There are many concerns about using small reactors for energy generation, but the unique needs of the military make their use for military purposes more likely.

China’s marine nuclear power platform to start by 2020 in S.China Sea

A shipbuilding firm in Central China’s Hubei Province has announced it is set to start construction on a marine nuclear power platform designed to supply power for the country’s offshore oil drilling platforms and islands. The technical design has been finalized, and the project is moving to the construction phase, the local Hubei Daily reported. China National Nuclear Power (CNNP) is partnering with Chinese shipyards and electric machinery companies to develop the $150 million project. China has had some overseas success already with its Hualong reactor, with Pakistan currently building a plant using the technology.

The primary focus of China’s offshore nuclear platforms – reportedly to be commissioned before 2020 – will be for civil use on islands in waters such as the South China Sea, and as the technology matures, it could be applied to military nuclear vessels, Chinese analysts said.

The platforms have two modes, floating and submersible. They will focus on solving power supply issues in the Xisha Islands and other South China Sea islands where infrastructure construction is underway, and on urban agglomerations after that, Song Zhongping, a Beijing-based military expert and TV commentator, told the Global Times.

China National Nuclear Corporation (CNNC) is set to launch a small modular reactor (SMR) dubbed the “Nimble Dragon” with a pilot plant on the island province of Hainan, according to company officials. CNNC designed the Linglong, or “Nimble Dragon,” to complement its larger Hualong or “China Dragon” reactor and has been in discussions with Pakistan, Iran, Britain, Indonesia, Mongolia, Brazil, Egypt and Canada as potential partners. CNNC has said that China is expected to build 20 floating nuclear power stations in the future, which will significantly beef up the power and water supplies on the South China Sea islands, the official daily Global Times reported. Sun Qin, former chairman of the National Nuclear Corporation, said in March 2016 that the facility is scheduled to be put into operation in 2019.

China will prioritise the development of a floating nuclear power platform in the coming five years, in an effort to provide stable power to offshore projects and promote ocean gas exploitation, said Wang Yiren, vice director of the State Administration of Science, Technology and Industry for National Defence. Wang told Science and Technology Daily that Chinese authorities have already carried out research on relevant core technologies as well as the standardisation of maritime nuclear power plants.

“Floating power stations are less susceptible to natural disasters. In an emergency, the station could pump seawater into a boat to prevent core melting. Besides, the platform is small and can be dragged to a suitable place for maintenance,” one report said in February, quoting an expert.

China General Nuclear, the company behind the new project, stresses the flexible nature of a ship-based nuclear reactor. “The 200 MWt (60 MWe) reactor has been developed for the supply of electricity, heat and desalination and could be used on islands or in coastal areas, or for offshore oil and gas exploration.” Other potential uses could be for new, large-scale industrial installations and flexible emergency power to regions in the event of natural disasters such as earthquakes or tsunamis.

Zhang Jinlin, an academician at the Chinese Academy of Engineering and an expert at the CSIC 719 Research Institute, said that the platform is a typical civilian-military integration project, as its design fully takes civil demands into consideration while also tackling issues including safety, radiation protection and waste processing.

The nuclear reactor-related technology, when successfully reduced in size, could be later applied to the country’s military vessels, including nuclear-powered aircraft carriers or next generation nuclear submarines, Song said.


Russia building world’s first ‘floating’ nuclear power plant: Officials confirm construction ‘at closing stage’

Russia’s ‘Project 20870’ involves placing two nuclear reactors on 140-meter-long, 30-meter-wide barges. The plan would use these nuclear barges’ 300 MWt (thermal) or 70 MWe (electrical) output to power remote cities and industrial sites throughout the Russian Arctic. The cost of the floating plant is estimated at around 30 billion rubles (US$480 million), according to Sergey Zavyalov, head of the plant construction.

Construction work on the dock, which will host the floating nuclear power plant ‘Akademik Lomonosov’, has started and will be completed by 2019. Given the severity of weather conditions (in winter, the temperature drops to minus 60 degrees Celsius), the onshore facilities will be forced to endure ice impact and squalling winds.

The 21,000-ton unit will have two Russian-designed KLT-40S reactors, low-enriched-uranium-fueled reactors used in some of Russia’s icebreakers, and two steam-driven turbines. One unit is able to provide enough electricity to power a city of 200,000 people. It can also produce 300 megawatts of heat that can be transferred onshore, equal to saving some 200,000 tons of coal every year.

The FPU is not self-propelled and must be towed to the location of operation. It is a barge consisting of three decks and 10 compartments. Apart from reactors, it is equipped with storage facilities for fresh and spent nuclear fuel, as well as liquid and solid nuclear waste. Experts have praised floating power plants for being secure from earthquakes and tsunamis, as well as from meltdown threats, as the reactor’s active zone is underwater.

“Reactor units are small and self-contained. They are nothing like those installed at the Chernobyl nuclear power station, of course. A scenario like that at the Fukushima power plant is also excluded,” Professor Georgy Tikhomirov of the Moscow Engineering Physics Institute recently told EFE news agency.“The advantage of the floating nuclear power plant is that it can be moored almost anywhere where there is a power line,” Tikhomirov said.

Akademik Lomonosov is to become the first of a proposed fleet of floating nuclear power plants that can provide heat and energy to the country’s remote regions and assist in natural resource extraction. Russia also plans to lease the plants to other countries, where they will be used for electricity production and water desalination, as the facility can be converted into a desalination plant with a production capacity of some 240,000 cubic meters of fresh water per day.

Pavel Ipatov, Deputy CEO (Special Projects) in Russia’s state atomic energy corporation Rosatom, told IANS in an e-mail interview from Moscow that an FNPP is basically a mobile, low-capacity reactor unit operable in remote areas isolated from the main power distribution system, or in places hard to access by land.

“FNPPs are designed to maintain both uninterruptible power and plentiful desalinated water supply in remote areas,” he said. He explained that floating units are constructed for transport by sea or river to areas that are otherwise inaccessible or difficult to reach by land.

“The plant is constructed as a non-self propelled vessel to be towed by sea or river to the operation site. Its mobility will make it possible to relocate it from one site to another, if necessary,” he said. “The first floating NPP is to operate in Russia’s extreme northeastern region of Chukotka, where there is plenty of oil and gas exploration, gold mining and other mineral resource enterprises,” he added.

The FNPP has an electric capacity of 70 MW and is equipped with two reactors of 150 MW thermal capacity each. “A vessel like that can provide electric supply to a city of 200,000 and heat supply to a million-plus city,” Ipatov said. An FNPP’s operational life span ranges from 35 to 40 years.
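The figures Ipatov quotes can be sanity-checked with simple arithmetic; a sketch using only the numbers from the article:

```python
# Sanity check on the quoted FNPP figures: two 150 MW thermal reactors
# yielding 70 MW electric, serving a city of 200,000 people.

thermal_mw = 2 * 150                   # total thermal capacity (MW)
electric_mw = 70                       # electrical output (MW)

efficiency = electric_mw / thermal_mw  # thermal-to-electric conversion
watts_per_person = electric_mw * 1e6 / 200_000

print(f"thermal efficiency ~ {efficiency:.0%}")  # ~23%
print(f"{watts_per_person:.0f} W per resident")  # 350 W of average supply
```

The implied 23% thermal efficiency and roughly 350 W of average electrical supply per resident are both consistent with a small pressurized-water plant serving a modest city.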

Like conventional onshore nuclear plants, which are often equipped with desalination units for freshwater, the FNPP will have a desalination unit producing up to 240 cubic metres of water per hour. Regarding safety, Ipatov said that FNPPs would be governed by the same advanced safety parameters put in place after the Fukushima disaster in Japan in 2011.

“We see significant potential in Southeast Asia and other regions of the world. Memorandums of cooperation on floating nuclear power plants projects have been signed with China and Indonesia,” he said.

Iceberg Design Bureau won the tender of the Ministry of Industry and Trade for project development of a multipurpose nuclear maintenance ship. The ship is needed for servicing of Project 22220 icebreakers and the floating nuclear power plant Akademik Lomonosov. Baltiysky Zavod shipyard continues building the Project 22220 nuclear-powered icebreakers Arktika, Sibir and Ural, which are expected to join Atomflot in 2019, 2021 and 2022 respectively.



US Navy eyes small modular reactors for its Bases

The Navy had better success developing nuclear power for its aircraft carriers and submarines. But these have quite different requirements from today’s SMR proposals. A submarine reactor is designed to operate under stressful conditions, providing a burst of power when the vessel is accelerating, for example. And unlike civilian power plants, naval nuclear reactors don’t have to compete economically with other sources of power production. Their overwhelming advantage is that they enable a submarine to remain at sea for long periods of time without refueling.

Navy secretary Ray Mabus says there’s another alternative his department hasn’t explored yet: nuclear, and its time may have come. While nearly a fifth of the Navy’s ships run on nuclear power, the only land-based nuclear reactors the service operates are for training purposes. But Mabus said he wants to explore the concept of installing small, modular nuclear reactors on bases to continue the push toward independence from off-base energy.

Rather than the large, utility-scale nuclear plants currently in use by civilian power companies, Mabus said he envisions a system of small, “distributed” nuclear generators networked together via a microgrid on a given base.

“With some of the new technology that’s coming along, it’s much safer, it produces far less residue and nuclear waste, and it is an option that I think we should explore,” he said at the Council on Foreign Relations in New York. “They are safer than traditional nuclear plants because of automated safety features and containment systems that are entirely underground, and cheaper because they can be fabricated in factories and quickly assembled at the sites where they’ll be used.”


Small Nuclear Reactors

The major disadvantage of nuclear power compared with other types of electricity generation is that nuclear power is expensive. According to a 2014 report by the Wall Street advisory firm Lazard, the cost of generating a megawatt-hour of electricity from a new nuclear reactor (without considering government subsidies, including those for liability for severe accidents) is between US $92 and $132. Compare that with $61 to $87 for a natural-gas combined-cycle plant, $37 to $81 for wind turbines, and $72 to $86 for utility-scale solar. Nuclear’s high costs result directly from the very high costs of building a reactor—estimated by Lazard at $5.4 million to $8.3 million for each.
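Using the Lazard ranges above, range midpoints give a rough single number per technology. A small sketch (the midpoints are my simplification of the quoted ranges, not Lazard's own methodology):

```python
# Compare the Lazard 2014 LCOE ranges quoted above ($/MWh) via midpoints.

lcoe_ranges = {
    "nuclear": (92, 132),
    "gas combined-cycle": (61, 87),
    "wind": (37, 81),
    "utility solar": (72, 86),
}

midpoints = {tech: sum(rng) / 2 for tech, rng in lcoe_ranges.items()}
cheapest = min(midpoints, key=midpoints.get)

print(midpoints)  # nuclear's midpoint, 112.0, is the highest of the four
print(cheapest)   # 'wind'
```

Even this crude comparison shows nuclear's midpoint roughly 50% above the next-most-expensive option, which is the cost gap the article describes.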

The Army’s small nuclear reactors generated power for remote installations in Greenland, Antarctica, Alaska, and other locations. This program ended in 1979 due to a number of factors, including the accident at Three Mile Island, cheap fossil fuel prices, and an overall waning of national interest in nuclear power.  As Suid writes, the Army concluded “that the development of complex, compact nuclear plants of advanced design was expensive and time consuming…that the costs of developing and producing such plants are in fact so high that they can be justified only if the reactor has a unique capability and fills a clearly defined objective backed by the Department of Defense…[and that] the Army and the Pentagon had to be prepared to furnish financial support commensurate with the AEC’s [U.S. Atomic Energy Commission’s] development effort on the nuclear side.”

SMRs provide a number of benefits compared to commercialized light water reactors, or LWRs, some of which are of particular interest to the Department of Defense. SMR designs for military base applications, such as FliBe Energy’s Liquid Fluoride Thorium Reactor, provide a mobile and reliable avenue for on-site electrical power generation and desalination.

Storms, blown transformers or sabotage can disable power grids, which is of concern to military installations connected to them. In isolated areas or military installations, the loss of power to site infrastructure can result in significant financial loss or loss of life.

Employing SMR technology on military bases will also allow for access to clean water, which is a largely unavailable resource across the globe. The U.S. Navy’s nuclear-powered aircraft carriers desalinate an estimated 400,000 gallons per day.

SMRs use technology that establishes dynamic safety; enhances nuclear waste management protocols that benefit nonproliferation; and generates on-site electricity and potable water on military installations.



One major advantage of SMRs is their implementation of advanced safety features. SMRs employ passive safety systems that allow natural coolant circulation pathways to control reactor conditions. Passive safety requires that indefinite self-cooling and safe shutdown is possible without operator input, electrical power and additional coolant input.

SMRs are also significantly more compact than commercial LWRs. This reduces overall complexity and reduces potential modes of reactor control system failure. The Toshiba Super-Safe, Small and Simple (4S) reactor and the Lawrence Livermore National Laboratory Small Secure Transportable Autonomous Reactor (SSTAR) utilize a tamper-proof system that includes remote shutdown, a sealed reactor core and autonomous operation. These safety features minimize on-site personnel and allow for global SMR usage because they assist in securing the reactor core against violent non-state actors and terrorist groups seeking to gain access to nuclear material.


Nonproliferation and Waste Management

The generation of radioactive waste, such as Uranium-238 and Plutonium-239, occurs over the course of a commercial LWR fuel cycle. The production and storage of these materials is a threat to public health and an obstacle to nuclear disarmament. Pu-239 is the most common material used in nuclear weaponry. Reducing Pu-239 stockpiles aids the movement for nonproliferation of the global nuclear arsenal by decreasing the amount of material available for weapons production.

SMR designs powered by spent nuclear fuel are an international research focus. Developments based in the U.S. include SMR models such as the General Atomics Energy Multiplier Module; the X-energy 100; and the General Electric Hitachi Power Reactor Innovative Small Module.

The World Nuclear Association lists the features of an SMR, including:

  • Small power and compact architecture and usually (at least for nuclear steam supply system and associated safety systems) employment of passive concepts. Therefore there is less reliance on active safety systems and additional pumps, as well as AC power for accident mitigation.
  • The compact architecture enables modularity of fabrication (in-factory), which can also facilitate implementation of higher quality standards.
  • Lower power leading to reduction of the source term as well as smaller radioactive inventory in a reactor (smaller reactors).
  • Potential for sub-grade (underground or underwater) location of the reactor unit providing more protection from natural (e.g. seismic or tsunami according to the location) or man-made (e.g. aircraft impact) hazards.
  • The modular design and small size lends itself to having multiple units on the same site.
  • Lower requirement for access to cooling water – therefore suitable for remote regions and for specific applications such as mining or desalination.
  • Ability to remove reactor module or in-situ decommissioning at the end of the lifetime.



While the benefits of small nuclear reactors are promising, there remain many unsolved disadvantages. From a technical standpoint, new small-scale reactor designs are immature. The current fleet of large-scale light water reactors has demonstrated decades of successful operation at very high standards, and new small reactor designs will need to undergo rigorous testing to prove their worth. New control and safety systems, non-traditional components, and unconventional fuel and cooling materials are examples of design features that will take many years to develop to commercial viability.

Other challenges to small nuclear reactors are non-technical. Too many competing designs in the market create confusion and delay the standardization that will be necessary for widespread adoption. Greenpeace has voiced concern about the effect massive storms could have on ocean-based nuclear reactors.

Others worry ocean-based reactors would be more susceptible to terrorist attacks and increased proliferation risks. Operating nuclear reactors in an Arctic climate is complicated to say the least, and another concern is that barges like those used in the Russian project might not be as protected from earthquake or tsunamis as reactors further out at sea.



China is on the way to displacing the US as global leader in renewable energy, enabling strategic military advantage

US president Trump’s budget contains significant cuts in government spending on clean energy development, while he pursues policies to bring back coal. The administration’s 2018 budget proposes to slash funding for the Office of Energy Efficiency and Renewable Energy by a stunning 71.9%. “We are unified that cuts of this magnitude…will do serious harm to this office’s critical work and America’s energy future,” former officials wrote in a letter to members of Congress.

Investments made by that office are critical to “creating good-paying jobs, cutting pollution and ensuring American global competitiveness,” the letter said. Solar employment expanded last year 17 times faster than the total U.S. jobs market, according to the Solar Foundation. Overall, taxpayer investment of $12 billion in the DOE’s renewable division has generated an estimated net economic benefit to the U.S. of more than $230 billion, according to its website.

Meanwhile, China put almost $88 billion into renewables in 2016, one-third more than the U.S., pointing to its new role as the world leader in renewable energy investment. China vaulted to the top of the world in solar power capacity in 2016, passing Germany, which had been the long-standing leader. The country added more than 34 gigawatts of solar capacity last year, nearly 1.5 times the amount the U.S. has installed in its entire history. China also installed more than 23 gigawatts of wind power in 2016, almost three times as much as the U.S. added that year.

Earlier, President Donald Trump announced withdrawal of the United States from the Paris climate accord. In a speech from the White House Rose Garden, Trump made a largely economic case for withdrawing from the agreement, arguing the nonbinding accord was unfair to American workers and U.S. competitiveness.

China said its CO2 emissions in 2017 will drop 1 percent from 2016, making it the fourth consecutive year of either zero growth or a decline in the country’s emissions. The forecast by China’s National Energy Administration is encouraging news in the effort to slow climate change.  China is on track to meet its pledge to get 15 percent of its energy from clean energy sources including renewables, nuclear and hydropower, and to reduce the energy intensity of its economy by 40 to 45 percent from 2005 levels by 2020.

“As Trump’s rhetoric leaves the world in doubt over what his plan is to tackle climate change, China is being thrust into a leadership role,” Li Shuo, a global policy advisor for Greenpeace, said in a statement.

China has announced that it will invest $361 billion in renewable energy by 2020. The investment will create over 13 million jobs in the sector, the National Energy Administration (NEA) said in a blueprint document that lays out its plan to develop the nation’s energy sector during the five-year 2016 to 2020 period. The NEA said installed renewable power capacity including wind, hydro, solar, and nuclear power will contribute to about half of new electricity generation by 2020.

The Energy Information Administration (EIA), the statistical arm of the U.S. Department of Energy, in its International Energy Outlook 2016 estimates China’s oil imports in 2015 amounted to about 6.6 million barrels per day (b/d), representing 59 percent of the country’s total oil consumption. By 2035, the EIA projects China’s oil imports will rise to about 9.7 million b/d, accounting for about 62 percent of total oil consumption.

China’s reliance on imported natural gas is also significant. According to the EIA, China’s natural gas imports, which amounted to 1.4 trillion cubic feet (Tcf) in 2015 (about 24 percent of consumption), are expected to rise to 6 Tcf (about 26 percent of consumption) in 2035. The EIA forecast on China’s energy imports implies a rather modest annual growth rate of about 2 percent for oil imports and a more robust 7.5 percent annual growth rate for gas imports.
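The quoted growth rates are consistent with the compound annual growth rate (CAGR) implied by the 2015 and 2035 figures; a quick verification sketch:

```python
# Verify the EIA growth rates quoted above via compound annual growth
# rate over the 20-year 2015-2035 horizon.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate from start to end over years."""
    return (end / start) ** (1 / years) - 1

print(f"oil: {cagr(6.6, 9.7, 20):.1%} per year")  # ~1.9%, the "about 2 percent"
print(f"gas: {cagr(1.4, 6.0, 20):.1%} per year")  # ~7.5%, matching the article
```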

The majority of China's oil and gas imports move over sea lines of communication (SLOCs) and through maritime choke points that are controlled by the U.S. Navy, leaving them susceptible to naval blockade.

China is developing alternate land routes to bypass current maritime routes. In May 2014, China entered into a historic contract with Russia, an estimated $400 billion gas deal to supply 38 billion cubic meters of gas annually over three decades starting in 2018. Per the EIA's base case projection, in 2035 Russia could satisfy about 85 percent of China's oil import requirements (8.1 of 9.7 million b/d) and all of China's needs for natural gas imports (6 Tcf). China shares a 4,179 kilometer (km) land border with Russia, so pipelines connecting Russian oil and gas fields to northeastern China would be secure, and energy flows could not be effectively shut down by the United States.

China's thrust in renewable energy will also reduce the vulnerability of its oil and gas imports over sea lines of communication (SLOCs) and through maritime choke points. Advanced energy systems will temper rising global demand for oil, impacting global diplomacy and influence, with direct national security implications for the U.S., says a CNA report.

The United States must lead in the global transition to clean energy or risk losing influence in South Asia and Africa, a coalition of retired U.S. generals and admirals said in a report.

Russia and Iran, two countries not always friendly to Washington, are positioning themselves to meet burgeoning oil and natural gas demand in India and China. For example, a nearly $13 billion agreement giving Russian state oil firm Rosneft and its partners a 98 percent share of India’s Essar oil company is expected to close this month.

Meanwhile, China and countries in Europe are leading the way in investing in clean energy in Africa and India, where energy demand is expected to grow strongly for decades.


A global lead in renewable technology will enable strategic military advantage

Energy is also vital to defense warfighting capabilities, such as increased range, better endurance, longer time on station, and reduced requirements for resupply.

“Installations at home and abroad are increasingly dependent on energy for real-time command and control, remote operations of unmanned air and ground units, and intelligence analysis. In addition, the Defense Department is developing a new strategy—“The Third Offset Strategy”—that places specific focus on next-generation technologies, platforms, and weapons systems to sustain our competitive advantage. These new systems, such as rail guns and directed energy weapons (lasers), will be more dependent on reliable high-capacity electrical systems that will require advanced energy components. Secure power is essential now, and will be even more so in the future,” says CNA report.

Improved energy performance also can reduce the risk and effects of attacks on supply lines and enable tactical and operational superiority. One in nearly 40 fuel convoys in Iraq in 2007 resulted in a death or serious injury, according to a study commissioned by the Defense Department. In Afghanistan the same year, one in 24 fuel convoys suffered casualties.

“Advanced energy systems can lower vulnerable logistical requirements, extending missions by reducing the need for fuel resupply, and lowering the number of combat forces needed to protect fuel supplies for our warfighters in forward operations and installations. DoD should explore alternate and renewable energy sources that are reliable, cost effective, and can relieve the dependence of deployed forces on vulnerable fuel supply chains to better enable our primary mission to win in conflict. The purpose of such efforts should be to increase the readiness and reach of our forces,” said James Mattis, U.S. Secretary of Defense.

The DoD has made advanced energy sources for installations a priority. This is being driven “to ensure the energy resilience and reliability of a large percentage of the energy it manages, reduce the amount of budget allocated to this energy, and treat installation energy as a force multiplier in the support of military readiness.” To realize this objective, the DoD has set a goal to procure at least 25 percent of total facility energy from renewable energy sources, while installing 3 gigawatts of renewable energy directly on its installations, by FY 2025.

A lack of emphasis on renewables may also impact the U.S. Department of Defense (DoD), which had embarked upon an ambitious program of expanded renewable energy generation on bases and in the field, with a goal of producing 25% of its energy from renewable sources by 2025. The armed forces nearly doubled renewable power generation between 2011 and 2015, to 10,534 billion British thermal units, or enough to power about 286,000 average U.S. homes, according to a Department of Defense report. The number of military renewable energy projects nearly tripled to 1,390 between 2011 and 2015, department data showed, with a number of utilities and solar companies benefiting.


Zhao Keshi, a member of the CMC and director of the Logistics Department of the CMC, asserted that President Xi Jinping conceives of energy construction as an integral part of the national security plan to include expansion and construction of more renewable energy resources.

Additionally, Zhao identified two important and ongoing trends in his remarks: the revolution in national energy and the full integration of civilian and military (civ-mil) development that will enhance the Chinese “wartime ability to fight.” From Zhao’s comments, it would appear that China is securitizing renewable energy, as part of a broader energy strategy (能源戰略).

Chinese leader Xi has repeatedly stressed the importance of “military-civilian integration” as a core component of the country’s military development strategy. China’s leaders believe this integration will help China continue its rapid defense modernization without creating too great a drag on its economy. “Through in-depth development of military-civilian integration, military technologies are gradually applied in civilian fields, making high-tech equipment available to commercial markets. At the same time, we have also emphasized the importance of encouraging more civilian product suppliers to actively participate in the defense-building process,” said Dai Hao, Director-General of China’s Institute of Command and Control.


According to the CNA Military Advisory Board, a Virginia-based think-tank, the US is falling behind other countries in advanced energy technologies, threatening national security and undermining its global influence. The board argues that the US should “take a leadership role in the transition to advanced energy” by stepping up research and development of technologies such as renewables, nuclear power, energy efficiency and electricity storage.





Thermoelectric Generators to generate power for future homes, vehicles, consumer and Military equipment

A thermoelectric (TE) device can directly convert heat emanating from the Sun, radioisotopes, automobiles, industrial sectors, or even the human body to electricity. Many electrical and mechanical devices, such as car engines, produce heat as a byproduct of their normal operation. It’s called “waste heat,” and its existence is required by the fundamental laws of thermodynamics.

“Over half of the energy we use is wasted and enters the atmosphere as heat,” said Boona, a postdoctoral researcher at Ohio State. “Solid-state thermoelectrics can help us recover some of that energy. These devices have no moving parts, don’t wear out, are robust and require no maintenance. Unfortunately, to date, they are also too expensive and not quite efficient enough to warrant widespread use. We’re working to change that.”

Today, thermoelectric generators allow lost thermal energy to be recovered, energy to be produced in extreme environments, electric power to be generated in remote areas and microsensors to be powered. Direct solar thermal energy can also be used to produce electricity, writes Daniel Champie. Thermoelectric generators are expected to fulfil significant market needs including  individual cars, transportation trucks and distant sensors in energy intensive industries, e.g. metal or glass production.

Thermoelectric generators allow waste heat to be recovered and used productively to improve fuel economy and reduce CO2 emissions. According to the US military, reductions in the Department's need for energy can improve warfighting capabilities, such as increased range, better endurance, longer time on station, and reduced requirements for resupply. Improved energy performance also can reduce the risk and effects of attacks on supply lines and enable tactical and operational superiority.

The military is interested in thermoelectrics for energy transfer, energy harvesting, thermal management, and refrigeration. DARPA's Materials for Transduction (MATRIX) program is seeking new materials for energy transduction (conversion of energy from one form into another), such as communications antennas (radio waves to electrical signals), thermoelectric generators (heat to electricity) and electric motors (electromagnetic to kinetic energy), that would result in new capabilities or significant size, weight, and power (SWAP) reduction for military devices and systems.

Although a number of materials with thermoelectric properties have been discovered, most produce too little power for practical applications. In spite of increased research and development, thermoelectric power-generating efficiency has remained relatively low, with efficiencies of not much more than 10 percent by the late 1980s. Researchers are developing better thermoelectric materials in order to go well beyond this performance level.

Thermoelectric generators

Thermoelectric generators utilize thermoelectric effects such as the Seebeck effect, the Peltier effect and the Thomson effect for energy conversion, in which an electric current is produced at the junction between two wires of different materials if they are at different temperatures. The voltage produced by TEGs, or Seebeck generators, is proportional to the temperature difference between the two metal junctions.

TEGs are made of pairs of p-type and n-type elements. The p-type elements are made of semiconductor materials doped such that the charge carriers are positive (holes) and the Seebeck coefficient is positive. The n-type elements are made of semiconductor materials doped such that the charge carriers are negative (electrons) and the Seebeck coefficient is negative.
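As a rough sketch of how this translates into module output, the open-circuit voltage of a module with N couples is V = N(S_p - S_n)ΔT, and delivered power follows from a simple series-resistance circuit model. All values below are illustrative assumptions, not figures from the article:

```python
def teg_output(n_pairs, s_p, s_n, delta_t, r_internal, r_load):
    """Open-circuit voltage and delivered power of a TEG module.

    V_oc = N * (S_p - S_n) * dT; current and load power follow from
    treating the module as a voltage source with a series resistance.
    """
    v_oc = n_pairs * (s_p - s_n) * delta_t
    current = v_oc / (r_internal + r_load)
    return v_oc, current ** 2 * r_load

# Hypothetical module: 127 couples, S_p = +200 uV/K, S_n = -200 uV/K,
# dT = 100 K, 3-ohm internal resistance, matched 3-ohm load.
v_oc, p_load = teg_output(127, 200e-6, -200e-6, 100, 3.0, 3.0)
print(round(v_oc, 2), round(p_load, 2))  # 5.08 V open-circuit, ~2.15 W into the load
```

Matching the load resistance to the internal resistance, as here, maximizes the power delivered by this simple model.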

Most current thermoelectric materials are based on rare or toxic elements, including cadmium-, telluride- or mercury-based materials, which preclude their implementation at large scale. More sustainable materials have been extensively investigated over the years, but mostly at laboratory scale. Furthermore, they failed so far to achieve sufficient performance levels to justify heavy industrial investments towards full scale production and market introduction.


Radioisotope Thermoelectric generators for Deep Space Missions

Radioisotope power systems are generators that produce electricity from the natural decay of plutonium-238, a non-weapons-grade isotope of plutonium used in power systems for NASA spacecraft. Heat given off by the natural decay of this isotope is converted into electricity, providing constant power during all seasons and through the day and night. An RTG can generate hundreds of watts to power multiple spacefaring instruments.

Radioisotope thermoelectric generators (RTGs) have been used to power many deep space missions, from the Cassini orbiter around Saturn and the New Horizons probe to the outer Solar System, to the Curiosity rover on Mars and the veteran Voyager probes. Because an RTG has no moving parts and doesn't require regular maintenance, it is well suited for powering gadgets that can't be attended to for long durations.

They offer the key advantage of operating continuously, independent of sunlight, for a long time. They have little or no sensitivity to cold, radiation or other effects of the space environment. Radioisotope electrical power and heating systems enable science missions that require greater longevity, more diverse landing locations or more power or heat than missions limited to solar power systems, says NASA.
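The longevity advantage can be sketched with the exponential decay of the Pu-238 heat source (half-life about 87.7 years); the starting power below is hypothetical, and gradual thermocouple degradation is ignored for simplicity:

```python
import math

PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238

def rtg_power(p0_watts, years):
    """Electrical output of an RTG after `years`, assuming the output
    simply tracks the exponential decay of the Pu-238 heat source
    (converter degradation ignored)."""
    return p0_watts * math.exp(-math.log(2) * years / PU238_HALF_LIFE_YEARS)

# A generator starting at a hypothetical 300 W, after a 20-year mission:
print(round(rtg_power(300.0, 20)))  # ~256 W still available
```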


Thermoelectric generators for Military

In 2014, GMZ Energy successfully demonstrated a 1,000W TEG designed for diesel engine exhaust heat recapture. With the effort involved in transporting fuel to a battle site, diesel can cost the U.S. military upwards of $10.50 per liter ($40 per gallon). So using that fuel more efficiently will save the Department of Defense significant amounts of money, says Scott Rackey, GMZ’s vice president of business development.

Cheryl A. Diuguid, CEO of GMZ, said: “With the successful demonstration of GMZ’s 1,000W TEG solution, we are excited to move to the next phase of this program and begin testing in a Bradley Fighting Vehicle. In addition to saving money and adding silent-power functionality for the U.S. Military, this TEG can increase fuel efficiency for most gasoline and diesel engines. We look forward to implementing our low-cost TEG technology into a broad array of commercial markets, including long-haul trucking, heavy equipment, and light automotive.”

“GMZ’s patented half-Heusler material is uniquely well suited for military applications. The 1000W TEG features enhanced mechanical integrity and high-temperature stability thanks to GMZ’s patented nanostructuring approach. GMZ’s TEG also enables silent generation, muffles engine noise, and reduces thermal signature,” claims GMZ.


Thermoelectric materials for TEG

The conversion of heat to electricity by thermoelectric devices may play a key role in the future for energy production and utilization. However, in order to meet that role, more efficient thermoelectric materials are needed that are suitable for high-temperature application.
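The standard efficiency metric is the dimensionless figure of merit ZT = S²σT/κ, which bounds the fraction of the Carnot limit a TEG can reach. A minimal sketch, using illustrative material values (not taken from the article):

```python
import math

def zt(seebeck, sigma, kappa, t):
    """Dimensionless thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa."""
    return seebeck ** 2 * sigma * t / kappa

def max_efficiency(t_hot, t_cold, zt_avg):
    """Maximum TEG conversion efficiency for an average figure of merit ZT:
    eta = (dT / T_hot) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + T_cold / T_hot)."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_avg)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Illustrative values loosely in the range of bismuth-telluride-type materials:
# S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K
print(round(zt(200e-6, 1e5, 1.5, 300), 2))      # ZT ~0.8
print(round(max_efficiency(500, 300, 1.0), 3))  # only ~8% of the heat is converted
```

The example shows why researchers chase higher ZT: even with ZT = 1 and a 200 K temperature difference, only a small fraction of the heat becomes electricity.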

New approach boosts performance in thermoelectric materials

A team of researchers – from universities across the United States and China, as well as Oak Ridge National Laboratory – is reporting a new mechanism to boost performance through higher carrier mobility, increasing how quickly charge-carrying electrons can move across the material. The work, reported this week in the Proceedings of the National Academy of Sciences, focused on a recently discovered n-type magnesium-antimony material with a relatively high thermoelectric figure of merit, but lead author Zhifeng Ren said the concept could also apply to other materials.

“When you improve mobility, you improve electron transport and overall performance,” said Ren, M.D. Anderson Chair professor of physics at the University of Houston and principal investigator at the Texas Center for Superconductivity at UH.

The material’s power factor can be boosted by increasing carrier mobility, the researchers said. “Here we report a substantial enhancement in carrier mobility by tuning the carrier scattering mechanism in n-type Mg3Sb2-based materials,” they wrote. “… Our results clearly demonstrate that the strategy of tuning the carrier scattering mechanism is quite effective for improving the mobility and should also be applicable to other material systems.”

Composite material yields 10 times—or higher—voltage output

In Nature Communications, engineers from The Ohio State University describe how they used magnetism on a composite of nickel and platinum to amplify the voltage output 10 times or more—not in a thin film, as they had done previously, but in a thicker piece of material that more closely resembles components for future electronic devices.

Instead of applying a thin film of platinum on top of a magnetic material as they might have done before, the researchers distributed a very small amount of platinum nanoparticles randomly throughout a magnetic material—in this case, nickel. The resulting composite produced enhanced voltage output due to the spin Seebeck effect. This means that for a given amount of heat, the composite material generated more electrical power than either material could on its own. Since the entire piece of composite is electrically conducting, other electrical components can draw the voltage from it with increased efficiency compared to a film.

While the composite is not yet part of a real-world device, Heremans is confident the proof-of-principle established by this study will inspire further research that may lead to applications for common waste heat generators, including car and jet engines. The idea is very general, he added, and can be applied to a variety of material combinations, enabling entirely new approaches that don’t require expensive metals like platinum or delicate processing procedures like thin-film growth.

Efficient, inexpensive and bio-friendly thermoelectric material

Now the team, led by University of Utah materials science and engineering professor Ashutosh Tiwari, has found that a combination of the chemical elements calcium, cobalt and terbium can create an efficient, inexpensive and bio-friendly material that can generate electricity through a thermoelectric process involving heat and cold air. The material needs less than a one-degree difference in temperature to produce a detectable voltage.  “There are no toxic chemicals involved,” he says. “It’s very efficient and can be used for a lot of day-to-day applications.”

The applications for this new material are endless, Tiwari says. It could be built into jewelry that uses body heat to power implantable medical devices such as blood-glucose monitors or heart monitors. It could be used to charge mobile devices through cooking pans, or in cars where it draws from the heat of the engine. Airplanes could generate extra power by using heat from within the cabin versus the cold air outside. Power plants also could use the material to produce more electricity from the escaped heat the plant generates.


Thermoelectric generators (TEGs) based on polymers

Shannon Yee, an assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering is  pioneering the use of polymers in thermoelectric generators (TEGs). TEGs are typically made from inorganic semiconductors. Yet polymers are attractive materials due to their flexibility and low thermal conductivity. These qualities enable clever designs for high-performance devices that can operate without active cooling, which would dramatically reduce production costs.

The researchers have developed P- and N-type semiconducting polymers with high performing ZT values (an efficiency metric for thermoelectric materials). “We’d like to get to ZT values of 0.5, and we’re currently around 0.1, so we’re not far off,” Yee said.

In one project funded by the Air Force Office of Scientific Research, the team has developed a radial TEG that can be wrapped around any hot water pipe to generate electricity from waste heat. Such generators could be used to power light sources or wireless sensor networks that monitor environmental or physical conditions, including temperature and air quality.

“Thermoelectrics are still limited to niche applications, but they could displace batteries in some situations,” Yee said. “And the great thing about polymers, we can literally paint or spray material that will generate electricity.”

This opens opportunities in wearable devices, including clothing or jewelry that could act as a personal thermostat and send a hot or cold pulse to your body. Granted, this can be done now with inorganic thermoelectrics, but this technology results in bulky ceramic shapes, Yee said. “Plastics and polymers would enable more comfortable, stylish options.”

Although not suitable for grid-scale application, such devices could provide significant savings, he added.


Carbon Nanotubes Boost Thermoelectric Performance

In a report published in October, scientists from the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) used single-walled carbon nanotubes (SWCNTs) to advance the thermoelectric performance of organic semiconductors. The carbon nanotube thin films, they said, could ultimately be integrated into fabrics to convert waste heat into electricity or serve as a small power source.

In organic thermoelectric materials, carbon nanotubes are often an electrically conductive “filler” – one part of a polymer-based composite. The NREL researchers believe that carbon nanotubes could be a thermoelectric material in their own right, and a primary material for efficient thermoelectric generators.

The NREL researchers demonstrated that the same SWCNT thin films achieve equivalent thermoelectric performance when doped with either positive or negative charge carriers – an important finding, says Ferguson. The identical performance, he said, suggests that carbon nanotube networks have the potential to be used for both the p-type and n-type legs in a thermoelectric device. P-type and n-type legs can be made from the same SWCNT material, inherently balancing the electrical current in each and simplifying device manufacturing.

“That opens up the possibility of fabricating a device that is essentially a single semiconductor material, and then creating p- and n-type regions in that semiconductor,” said Ferguson. The same cannot be said of almost all inorganic semiconductor materials, said the senior scientist, which are typically n-type or p-type, but rarely both.

According to the team’s report, NREL’s combination of ink chemistry, solid-state polymer removal, and charge-transfer doping strategies enable n-type and p-type TE power factors, in the range of 700 μW m−1 K−2 at 298 K, for the thin films containing 100% s-SWCNTs.

“Our results indicate that the TE performance of s-SWCNT-only material systems is approaching that of traditional inorganic semiconductors, paving the way for these materials to be used as the primary components for efficient, all-organic TE generators,” said the authors in their Energy & Environmental Science abstract.


Graphene for an Ultra-Efficient Thermionic Generator

Thermionic energy converters (TECs) have traditionally used bimetallic junctions to convert heat into electricity. Now researchers at Stanford University have built a new prototype that uses graphene in place of metal, making it nearly seven times more efficient than the original.

“TEC technology is very exciting. With improvement in the efficiency, we expect to see an enormous market for it,” said Stanford researcher and lead author of the paper, Hongyuan Yuan. “TECs could not only help make power stations more efficient, and therefore have a lower environmental impact, but they could be also applied in distributed systems like solar cells. In the future, we envisage it being possible to generate 1-2 kilowatts of electricity from water boilers, which could partially power your house.”

Stanford’s TEC prototype uses two electrodes, the emitter and collector, which are separated by a small vacuum gap. The researchers tested their prototype using a single sheet of graphene in place of tungsten as a collector material. Their results revealed that the new carbon-based collector material improved the efficiency by 6.7 times when converting heat into electricity at 1,000° C (1,832° F).

The technology is still not ready to be applied to practical uses such as powering homes, as it still works only in a vacuum chamber. But researchers are working on a vacuum packaged TEC that will allow them to test the reliability and efficiency of the generator in real-world situations, as reported by Colin Payne.


Market growth

The market for thermoelectric energy harvesters will reach over $1.1 billion by 2026, according to a report by IDTechEx. A large number of car companies, including Volkswagen, Volvo, Ford and BMW, in collaboration with NASA, have been developing thermoelectric waste heat recovery systems in-house, each achieving different levels of performance but all expecting improvements of 3-5% in fuel economy, while the power generated by these devices could potentially reach up to 1200W.

Wireless sensors powered by thermoelectric generators, in environments where temperature differentials exist, would avoid issues with battery lifetime and reliability. Applications could include saving energy when cooking by utilising thermo-powered cooking sensors, or powering mobile phones, watches or other consumer electronics; even body sensing could become more widespread with sensory wristbands, clothing or athletic apparel that monitor vitals such as heart rate and body temperature.




Military Wireless power transfer (WPT) technology to alleviate the battlefield battery burden for soldiers, manned and unmanned vehicles

Wireless power transfer (WPT), or wireless energy transmission, is the transmission of electrical energy from a power source to a consuming device without the use of discrete man-made conductors. WPT uses a wireless transmitter that employs time-varying electric, magnetic, or electromagnetic fields to convey energy to one or more receivers, where it is converted back to an electrical current and then used.


Wireless power techniques fall into two categories, non-radiative and radiative. In non-radiative techniques, power is typically transferred by magnetic fields using inductive coupling between coils of wire. A current focus is to develop wireless systems to charge mobile and handheld computing devices such as cellphones, digital music players and portable computers without being tethered to a wall plug. Power may also be transferred by electric fields using capacitive coupling between metal electrodes. In radiative far-field techniques, also called power beaming, power is transferred by beams of electromagnetic radiation, like microwaves or laser beams.
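For the non-radiative (inductive) case, a commonly used first-order model gives the maximum achievable link efficiency in terms of the coil coupling coefficient k and the coil quality factors Q1 and Q2. A sketch with illustrative values (the coil numbers are assumptions, not from the article):

```python
import math

def link_efficiency(k, q1, q2):
    """Maximum efficiency of an inductively coupled WPT link:
    eta_max = x / (1 + sqrt(1 + x))^2, where x = k^2 * Q1 * Q2.

    k is the magnetic coupling coefficient between the coils;
    Q1, Q2 are the transmitter and receiver coil quality factors.
    """
    x = k ** 2 * q1 * q2
    return x / (1.0 + math.sqrt(1.0 + x)) ** 2

# Tightly coupled charging pad (k ~ 0.5) versus a loosely coupled
# resonant link (k ~ 0.05), both with Q = 100 coils:
print(round(link_efficiency(0.5, 100, 100), 3))   # ~0.96
print(round(link_efficiency(0.05, 100, 100), 3))  # ~0.67
```

The model makes the design trade-off explicit: as separation grows and k falls, only high-Q resonant coils keep the efficiency usable.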


WPT has the unique potential to transform war fighting of the future and alleviate the battlefield battery burden for both soldiers and manned and unmanned vehicles on land, air, and undersea. QinetiQ’s Talon robots that were deployed in Afghanistan automatically recharged their batteries when they were docked to an armored vehicle.


Other recent examples are charging of Soldier’s central battery from vehicle seat back as they sit in vehicles, charging of handheld devices through vests, powering helmet-mounted devices through Soldier vest-to-helmet WPT, Soldier helmet-to-goggle WPT to power devices such as night vision, radio devices and defog optics.


In the future advancements in wireless energy transfer will enable distribution of power amongst power sources, multimodal energy harvesters, and loads to occur wirelessly on the Soldier as a platform, so that all carried equipment will be powered and ready for operation at all times without thought to replacing individual equipment power sources.


As a long term goal, the Army is looking to supply troops remotely using wireless systems that could transfer power from a drone to solar panels or other devices that soldiers could plug into on the battlefield, officials said.

US Navy developing undersea wireless technology to recharge UUVs

The U.S. Navy has started making a big push into unmanned underwater vehicles for a variety of missions, including reconnaissance, mine hunting, ocean floor mapping, and anti-submarine warfare. The future trend is to use electric propulsion, as it is more efficient and stealthy. However, one of the challenges is charging these vehicles: they need to return to base just so a human can plug them into a charging station.


Electrically powered, they are quiet and can travel great distances from their mother ship. The problem: it’s hard for a floating robot to plug itself into a charging station at sea. To overcome this challenge, the US Navy is now developing methods to recharge unmanned underwater vehicles (UUVs) using undersea wireless technology, in a bid to reduce time between missions and enhance overall utility. In the underwater energy transfer programme, data is also transferred wirelessly using an underwater optical communications system.



“This type of technology is going to widen the array of missions the Navy can use UUVs for. Having a UUV that can travel long distances gathering intel from ports and areas of the world our surface ships and underwater craft typically can’t go is going to increase the effectiveness of them,” said Dr. Graham Sanborn, Engineer at SSC Pacific. “It’s also going to make missions safer, because service members will no longer need to accompany the machine.”


“Underwater data and energy transfer are expected to multiply the effectiveness of Navy-operated UUVs and other unmanned platforms by providing a vehicle-agnostic method for autonomous underwater energy charging,” said Alex Askari, Naval Surface Warfare Center, Carderock Division (NSWCCD) technical lead.


Carderock Division’s developed technology enables power transmission between underwater systems, such as UUVs. During the main demonstration, the team was successful in transferring power wirelessly from an underwater docking station to a MARV UUV section, and ultimately to the UUV’s battery, which was charged at 2 kilowatts while submerged. The Mid-sized Autonomous Research Vehicle (MARV) UUV is 16.5 feet long and just slightly more than one foot in diameter.


WiTricity engineers have also demonstrated the ability to wirelessly transfer several hundred watts of power through seawater. WiTricity envisions UUVs being recharged simply by floating alongside a dock, larger vessel, or other power source, eliminating the need for tight mechanical coupling and allowing power to be transferred underwater safely, reliably, and efficiently.


A primary obstacle is the difference in conductivity between air and seawater. For example, the technology being developed by the team at SSC Pacific must take into account the fact that seawater starts becoming less conductive at a frequency of 20kHz, according to research published by the Naval Surface Warfare Center Carderock Division.
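One way to picture the frequency constraint is through the electromagnetic skin depth, the distance over which fields in a conductor attenuate by a factor of 1/e; in seawater it shrinks rapidly with frequency. A sketch using a typical seawater conductivity of about 4 S/m (an assumed textbook value, not a figure from the Carderock research):

```python
import math

def skin_depth(freq_hz, conductivity_s_per_m, mu_r=1.0):
    """Electromagnetic skin depth: delta = sqrt(2 / (omega * mu * sigma)).
    Fields fall off by 1/e over one skin depth, so higher frequencies
    penetrate a conductive medium like seawater less deeply."""
    mu0 = 4e-7 * math.pi                 # permeability of free space
    omega = 2.0 * math.pi * freq_hz      # angular frequency
    return math.sqrt(2.0 / (omega * mu_r * mu0 * conductivity_s_per_m))

SEAWATER_SIGMA = 4.0  # S/m, a typical seawater conductivity
print(round(skin_depth(20e3, SEAWATER_SIGMA), 2))   # ~1.78 m at 20 kHz
print(round(skin_depth(200e3, SEAWATER_SIGMA), 2))  # ~0.56 m at 200 kHz
```

This is why undersea inductive links tend toward lower operating frequencies than their in-air counterparts.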


An additional component of SSC Pacific’s research is developing chargers that are standardized across multiple UUVs. “Currently if the Navy buys one underwater vehicle and some sort of charger it will only work with that brand or that particular type,” explained Dr. Alex Phipps, chief of the advanced integrated circuit technology branch at SSC Pacific. “What we aim to do is capture the common elements that could be reused for multiple vehicles and create a standard that we can give to industry so that anybody that wants to sell a vehicle and work with the Navy can conform to that standard and there’s interoperability across the fleet.”



DHPC Receives Army SBIR Award for Laser-Based Wireless Power Transfer


On May 10, 2017, DHPC was awarded the Army SBIR Phase II contract titled “Laser-Based Wireless Power Transfer”.


The objective of this SBIR Phase II is to develop an advanced and safe laser-based wireless power transmission (LBWPT) system capable of beaming laser power over up to several kilometers to a moving platform from ground, fixed or mobile platforms, or from a moving platform to ground. The developed system will be highly reliable and affordable, with additional emphasis on high efficiency and low SWaPC (size, weight, power, and cost).


Successful development of the LBWPT system will be able to significantly extend the operational availability of a variety of autonomously operating devices and sensors for military and commercial applications, including robotic systems, unmanned aerial systems (UASs), surveillance systems, devices for remote chemical and biological detection, as well as free-space communications systems.


Our innovative development will also reduce warfighters’ exposure to enemy forces, as well as reduce the risk of uncovering the locations of emplaced powered devices and sensors. LBWPT development may also reduce the on-board battery weight of electrically powered systems, resulting in increased useful payload.


In addition, the LBWPT system will provide a unique capability of remotely supplying power to contaminated sites or inaccessible areas that pose a significant personnel threat, such as nuclear power stations and chemical plants destroyed by natural disasters, such as earthquakes or tsunami, or by actions of terrorists or enemy forces.


DHPC will build a prototype system capable of establishing and maintaining power transfer to a moving receiver. The system’s overall DC source to DC conversion efficiency is expected to reach 20% or better.
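The quoted 20% DC-to-DC target can be read as a product of stage efficiencies along the beaming chain. The stage values below are illustrative assumptions, not DHPC figures:

```python
def lbwpt_dc_to_dc(eta_laser, eta_atmosphere, eta_capture, eta_pv):
    """End-to-end DC-to-DC efficiency of a laser power-beaming chain:
    electrical-to-optical conversion at the laser, atmospheric
    transmission, beam capture at the receiver aperture, and
    photovoltaic optical-to-electrical conversion."""
    return eta_laser * eta_atmosphere * eta_capture * eta_pv

# Assumed stage efficiencies: 50% laser, 90% clear-air transmission,
# 95% beam capture, 50% laser-tuned photovoltaic cells.
eta = lbwpt_dc_to_dc(0.50, 0.90, 0.95, 0.50)
print(eta)  # roughly 0.21, consistent with the ~20% target above
```

Because the stages multiply, a modest improvement in any single stage (for example the PV conversion) lifts the whole chain.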


