Friday, October 7, 2016

Fall cleaning: Optimizing V8 memory consumption

Memory consumption is an important dimension in the JavaScript virtual machine performance trade-off space. Over the last few months the V8 team analyzed and significantly reduced the memory footprint of several websites that were identified as representative of modern web development patterns. In this blog post we present the workloads and tools we used in our analysis, outline memory optimizations in the garbage collector, and show how we reduced memory consumed by V8’s parser and its compilers.


In order to profile V8 and discover optimizations that have impact for the largest number of users, it is crucial to define workloads that are reproducible, meaningful, and simulate common real-world JavaScript usage scenarios. A great tool for this task is Telemetry, a performance testing framework that runs scripted website interactions in Chrome and records all server responses in order to enable predictable replay of these interactions in our test environment. We selected a set of popular news, social, and media websites and defined the following common user interactions for them:

A workload for browsing news and social websites:
  1. Open a popular news or social website, e.g. Hacker News.
  2. Click on the first link.
  3. Wait until the new website is loaded.
  4. Scroll down a few pages.
  5. Click the back button.
  6. Click on the next link on the original website and repeat steps 3-6 a few times.
A workload for browsing media websites:
  1. Open an item on a popular media website, e.g. a video on YouTube.
  2. Consume that item by waiting for a few seconds.
  3. Click on the next item and repeat steps 2-3 a few times.
Once a workload is captured, it can be replayed as often as needed against a development version of Chrome, for example each time there is a new version of V8. During playback, V8’s memory usage is sampled at fixed time intervals to obtain a meaningful average. The benchmarks can be found here.

Memory Visualization

One of the main challenges when optimizing for performance in general is to get a clear picture of internal VM state to track progress or weigh potential tradeoffs. For optimizing memory consumption, this means keeping accurate track of V8’s memory consumption during execution. There are two categories of memory that must be tracked: memory allocated to V8’s managed heap and memory allocated on the C++ heap. The V8 Heap Statistics feature is a mechanism used by developers working on V8 internals to get deep insight into both. When the --trace-gc-object-stats flag is specified when running Chrome (M54 or newer) or the d8 command line interface, V8 dumps memory-related statistics to the console. We built a custom tool, the V8 heap visualizer, to visualize this output. The tool shows a timeline-based view for both the managed and C++ heaps, a detailed breakdown of the memory usage of certain internal data types, and size-based histograms for each of those types.
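As a quick illustration of this workflow, any allocation-heavy script can be used to feed the statistics dump; the snippet below is our own toy example, not part of the tooling (the file name alloc.js is hypothetical):

// Run as: d8 --trace-gc-object-stats --expose-gc alloc.js
// (--expose-gc makes the gc() function available in the shell)
const arrays = [];
for (let i = 0; i < 100000; i++) {
  arrays.push(new Array(8).fill(i)); // many small FixedArray backing stores
}
gc(); // force a full garbage collection so statistics are emitted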

A common workflow during our optimization efforts involves selecting an instance type that takes up a large portion of the heap in the timeline view, as depicted in Figure 1. Once an instance type is selected, the tool then shows a distribution of uses of this type. In this example we selected V8’s internal FixedArray data structure, which is an untyped vector-like container used ubiquitously in all sorts of places in the VM. Figure 2 shows a typical FixedArray distribution, where we can see that the majority of memory can be attributed to a specific FixedArray usage scenario. In this case FixedArrays are used as the backing store for sparse JavaScript arrays (what we call DICTIONARY_ELEMENTS). With this information it is possible to refer back to the actual code and either verify whether this distribution is indeed the expected behavior or whether an optimization opportunity exists. We used the tool to identify inefficiencies with a number of internal types.

Figure 1: Timeline view of managed heap and off-heap memory

Figure 2: Distribution of instance type
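For readers who want to reproduce the DICTIONARY_ELEMENTS case from Figure 2, here is a minimal sketch: making an array very sparse causes V8 to switch its backing store from a flat FixedArray to a dictionary-backed one.

const sparse = [];
sparse[0] = 'a';
sparse[1000000] = 'b'; // huge hole: the backing store goes dictionary-mode
// In d8, running with --allow-natives-syntax lets test-only intrinsics such
// as %HasDictionaryElements(sparse) confirm the transition (availability varies).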

Figure 3 shows C++ heap memory consumption, which consists primarily of zone memory (temporary memory regions used by V8 for short periods of time; discussed in more detail below). Since zone memory is used most extensively by the V8 parser and compilers, the spikes correspond to parsing and compilation events. A well-behaved execution consists only of spikes, indicating that memory is freed as soon as it is no longer needed. In contrast, plateaus (i.e. longer periods of time with higher memory consumption) indicate that there is room for optimization.

Figure 3: Zone memory

Early adopters can also try out the integration into Chrome’s tracing infrastructure. To do so, run the latest Chrome Canary with --track-gc-object-stats and capture a trace including the category v8.gc_stats. The data will then show up as the V8.GC_Object_Stats event.

JavaScript Heap Size Reduction

There is an inherent trade-off between garbage collection throughput, latency, and memory consumption. For example, garbage collection latency (which causes user-visible jank) can be reduced by using more memory to avoid frequent garbage collection invocations. For low-memory mobile devices, i.e. devices with under 512 MB of RAM, prioritizing latency and throughput over memory consumption may result in out-of-memory crashes and suspended tabs on Android.

To strike the right balance for these low-memory mobile devices, we introduced a special memory reduction mode which tunes several garbage collection heuristics to lower the memory usage of the JavaScript garbage-collected heap:
  1. At the end of a full garbage collection, V8’s heap growing strategy determines when the next garbage collection will happen based on the amount of live objects plus some additional slack. In memory reduction mode, V8 uses less slack, resulting in less memory usage due to more frequent garbage collections (see the sketch after this list).
  2. Moreover, this estimate is treated as a hard limit, forcing unfinished incremental marking work to be finalized in the main garbage collection pause. Outside memory reduction mode, unfinished incremental marking work may instead push the heap arbitrarily over this limit, so that the main garbage collection pause is triggered only once marking is finished.
  3. Memory fragmentation is further reduced by performing more aggressive memory compaction.
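The sketch below illustrates the heap-growing heuristic from point 1. It is illustrative only; the function name and growing factors are invented and are not V8’s actual tuning values.

// Illustrative only: not V8's real code or real constants.
function nextGcTrigger(liveBytesAfterFullGc, memoryReductionMode) {
  // Less slack in memory reduction mode means earlier, more frequent GCs.
  const growingFactor = memoryReductionMode ? 1.3 : 2.0; // hypothetical values
  return liveBytesAfterFullGc * growingFactor;
}

nextGcTrigger(50 * 1024 * 1024, true);  // ~65 MB limit in memory reduction mode
nextGcTrigger(50 * 1024 * 1024, false); // ~100 MB limit otherwise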

Figure 4 depicts some of the improvements on low-memory devices since Chrome M53. Most noticeably, the average V8 heap memory consumption of the mobile New York Times benchmark was reduced by about 66%. Overall, we observed a 50% reduction of average V8 heap size on this set of benchmarks.

Figure 4: V8 heap memory reduction since M53 on low memory devices

Another recently introduced optimization reduces memory not only on low-memory devices but also on beefier mobile and desktop machines. Reducing the V8 heap page size from 1 MB to 512 KB results in a smaller memory footprint when few live objects are present and reduces overall memory fragmentation by up to 2×. It also allows V8 to perform more compaction work, since the smaller work chunks enable the memory compaction threads to do more work in parallel.

Zone Memory Reduction

In addition to the JavaScript heap, V8 uses off-heap memory for internal VM operations. The largest chunk of this memory is allocated through memory areas called zones. Zones are a region-based memory allocator that enables fast allocation and bulk deallocation: all zone-allocated memory is freed at once when the zone is destroyed. Zones are used throughout V8’s parser and compilers (see the conceptual sketch below).
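Zones themselves are implemented in C++, but the lifetime pattern they enable is easy to model. The class below is a conceptual sketch of region-based allocation, not V8’s actual API:

// Conceptual sketch of region-based ("zone") allocation.
class Zone {
  constructor() { this.objects = []; }
  alloc(obj) { this.objects.push(obj); return obj; } // fast allocation
  destroy() { this.objects.length = 0; }             // everything freed at once
}

const zone = new Zone();
const node = zone.alloc({ type: 'BinaryOperation', op: '+' });
// ... the parser and compilers allocate many such temporary objects ...
zone.destroy(); // bulk deallocation once compilation is done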

One of the major improvements in M55 comes from reducing memory consumption during background parsing. Background parsing allows V8 to parse scripts while a page is being loaded. The memory visualization tool helped us discover that the background parser would keep an entire zone alive long after the code was already compiled. By immediately freeing the zone after compilation, we reduced the lifetime of zones significantly which resulted in reduced average and peak memory usage.

Another improvement results from better packing of fields in abstract syntax tree nodes generated by the parser. Previously we relied on the C++ compiler to pack fields together where possible. For example, two booleans require just two bits and should be located within one word, or within the unused fraction of the previous word. The C++ compiler doesn’t always find the most compressed packing, so we instead pack bits manually (see the sketch below). This not only reduces peak memory usage, but also improves parser and compiler performance.
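The idea behind manual bit packing can be sketched in a few lines of JavaScript; V8 does this in C++, and the flag names below are invented for illustration:

// Two booleans packed into two bits of a single integer field.
const IS_STRICT = 1 << 0;            // hypothetical flag
const HAS_DUPLICATE_PARAMS = 1 << 1; // hypothetical flag

let flags = 0;
flags |= IS_STRICT;                                    // set a flag
const isStrict = (flags & IS_STRICT) !== 0;            // true
const hasDupes = (flags & HAS_DUPLICATE_PARAMS) !== 0; // false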

Figure 5 shows the peak zone memory improvements since M54, a reduction of about 40% on average across the measured websites.

Figure 5: V8 peak zone memory reduction since M54 on desktop

Over the next months we will continue our work on reducing the memory footprint of V8. We have more zone memory optimizations planned for the parser, and we plan to focus on devices with 512 MB–1 GB of memory.

Update: All the improvements discussed above reduce Chrome 55’s overall memory consumption by up to 35% on low-memory devices compared to Chrome 53. Other device segments benefit only from the zone memory improvements.

Posted by the V8 Memory Sanitation Engineers Ulan Degenbaev, Michael Lippautz, Hannes Payer, and Toon Verwaest.

Friday, September 9, 2016

V8 Release 5.4

Every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s git master immediately before a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 5.4, which will be in beta until it is released in coordination with Chrome 54 Stable in several weeks. V8 5.4 is filled with all sorts of developer-facing goodies, so we’d like to give you a preview of some of the highlights in anticipation of the release.

Performance Improvements

V8 5.4 delivers a number of key improvements in memory footprint and startup speed. These primarily help accelerate initial script execution and reduce page load time in Chrome.


When measuring V8’s memory consumption, two metrics are very important to monitor and understand: peak memory consumption and average memory consumption. Typically, reducing peak consumption is just as important as reducing average consumption, since an executing script that exhausts available memory even for a brief moment can cause an out-of-memory crash, even if its average memory consumption is not very high. For optimization purposes, it’s useful to divide V8’s memory into two categories: on-heap memory containing actual JavaScript objects and off-heap memory containing the rest, such as internal data structures allocated by the compiler, parser, and garbage collector.

In 5.4 we tuned V8’s garbage collector for low-memory devices with 512 MB RAM or less. Depending on the website displayed, this reduces peak consumption of on-heap memory by up to 40%.

Memory management inside V8’s JavaScript parser was simplified to avoid unnecessary allocations, reducing off-heap peak memory usage by up to 20%. These savings are especially helpful in reducing the memory usage of large script files, including asm.js applications.

Startup & speed

Our work to streamline V8's parser not only helped reduce memory consumption, it also improved the parser's runtime performance. This streamlining, combined with other optimizations of JavaScript builtins and how accesses of properties on JavaScript objects use global inline caches, resulted in notable startup performance gains.

Our internal startup test suite, which measures real-world JavaScript performance, improved by a median of 5%. The Speedometer benchmark also benefits from these optimizations, improving by roughly 10–13% compared to V8 5.2.
~13% reduction on Speedometer (Mac)


Please check out our summary of API changes. This document is regularly updated a few weeks after each major release.

Developers with an active V8 checkout can use 'git checkout -b 5.4 -t branch-heads/5.4' to experiment with the new features in V8 5.4. Alternatively you can subscribe to Chrome's Beta channel and try the new features out yourself soon.

Posted by the V8 team

Tuesday, August 23, 2016

Firing up the Ignition Interpreter

V8 and other modern JavaScript engines get their speed via just-in-time (JIT) compilation of script to native machine code immediately prior to execution. Code is initially compiled by a baseline compiler, which can generate non-optimized machine code quickly. The compiled code is analyzed during runtime and optionally re-compiled dynamically with a more advanced optimizing compiler for peak performance. In V8, this script execution pipeline has a variety of special cases and conditions which require complex machinery to switch between the baseline compiler and two optimizing compilers, Crankshaft and TurboFan.

One of the issues with this approach (in addition to architectural complexity) is that the JITed machine code can consume a significant amount of memory, even if the code is only executed once. In order to mitigate this overhead, the V8 team has built a new JavaScript interpreter, called Ignition, which can replace V8’s baseline compiler, executing code with less memory overhead and paving the way for a simpler script execution pipeline.

With Ignition, V8 compiles JavaScript functions to a concise bytecode, which is between 25% and 50% of the size of the equivalent baseline machine code. This bytecode is then executed by a high-performance interpreter which yields execution speeds on real-world websites close to those of code generated by V8’s existing baseline compiler.

In Chrome 53, Ignition will be enabled for Android devices which have limited RAM (512 MB or less), where memory savings are most needed. Results from early experiments in the field show that Ignition reduces the memory of each Chrome tab by around 5%.

V8’s compilation pipeline with Ignition enabled.


In building Ignition’s bytecode interpreter, the team considered a number of potential implementation approaches. A traditional interpreter, written in C++, would not be able to interact efficiently with the rest of V8’s generated code. An alternative would have been to hand-code the interpreter in assembly; however, given that V8 supports nine architecture ports, this would have entailed substantial engineering overhead.

Instead, we opted for an approach which leveraged the strength of TurboFan, our new optimizing compiler, which is already tuned for optimal interaction with the V8 runtime and other generated code. The Ignition interpreter uses TurboFan’s low-level, architecture-independent macro-assembly instructions to generate bytecode handlers for each opcode. TurboFan compiles these instructions to the target architecture, performing low-level instruction selection and machine register allocation in the process. This results in highly optimized interpreter code which can execute the bytecode instructions and interact with the rest of the V8 virtual machine in a low-overhead manner, with a minimal amount of new machinery added to the codebase.

Ignition is a register machine, with each bytecode specifying its inputs and outputs as explicit register operands, as opposed to a stack machine where each bytecode would consume inputs and push outputs onto an implicit stack. A special accumulator register serves as an implicit input and output register for many bytecodes, which keeps bytecodes small by avoiding the need to name that register explicitly. Since many JavaScript expressions involve chains of operations which are evaluated from left to right, the temporary results of these operations can often remain in the accumulator throughout the expression’s evaluation, minimizing the need for operations that load and store to explicit registers. An example follows below.
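To make the accumulator concrete, consider a simple chain of additions. The listing below is illustrative only; real output can be dumped with d8 --print-bytecode, and the exact opcodes and operand order vary across V8 versions.

function add3(a, b, c) {
  return a + b + c;
}
// Roughly the shape of the generated Ignition bytecode:
//   Ldar a0     ; load argument a into the accumulator
//   Add  a1     ; accumulator = accumulator + b
//   Add  a2     ; accumulator = accumulator + c
//   Return      ; return the accumulator
// The intermediate results a+b and a+b+c never touch an explicit register.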

As the bytecode is generated, it passes through a series of inline-optimization stages. These stages perform simple analysis on the bytecode stream, replacing common patterns with faster sequences, removing some redundant operations, and minimizing the number of unnecessary register loads and transfers. Together, these optimizations further reduce the size of the bytecode and improve performance.

For further details on the implementation of Ignition, see our BlinkOn talk:


Our focus for Ignition up until now has been to reduce V8’s memory overhead. However, adding Ignition to our script execution pipeline opens up a number of future possibilities. The Ignition pipeline has been designed to enable us to make smarter decisions about when to execute and optimize code in order to speed up web page loading, reduce jank, and make the interchange between V8’s various components more efficient.

Stay tuned for future developments in Ignition and V8.

by Ross McIlroy, V8 Ignition Jump Starter

Thursday, July 21, 2016

V8 at the BlinkOn 6 conference

BlinkOn is a biannual meeting of Blink, V8, and Chromium contributors. BlinkOn 6 was held in Munich on June 16 and June 17. The V8 team gave a number of presentations on architecture, design, performance initiatives, and language implementation.

The V8 BlinkOn talks are embedded below.

Real-world JavaScript Performance

Length: 31:41

Outlines the history of how V8 measures JavaScript performance, the different eras of benchmarking, and a new technique to measure page loads across real-world, popular websites with detailed breakdowns of time per V8 component.

Ignition: an interpreter for V8

Length: 36:39

Introduces V8’s new Ignition Interpreter, explaining the architecture of the engine as a whole, and how Ignition affects memory usage and startup performance.

How we measure and optimize for RAIL in V8’s GC

Length: 27:11

Explains how V8 uses the Response, Animation, Idle, Loading (RAIL) metrics to target low-latency garbage collection and the recent optimizations we’ve made to reduce jank on mobile.

ECMAScript 2015 and Beyond

Length: 28:52

Provides an update on the implementation of new language features in V8, how those features integrate with the web platform, and the standards process which continues to evolve the ECMAScript language.

Tracing Wrappers from V8 to Blink (Lightning Talk)

Length: 2:31

Highlights tracing wrappers between V8 and Blink objects and how they help prevent memory leaks and reduce latency.

Monday, July 18, 2016

V8 Release 5.3

Roughly every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s git master immediately before Chrome branches for a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 5.3, which will be in beta until it is released in coordination with Chrome 53 Stable. V8 5.3 is filled with all sorts of developer-facing goodies, so we’d like to give you a preview of some of the highlights in anticipation of the release in several weeks.


New Ignition Interpreter

Ignition, V8's new interpreter, is feature complete and will be enabled in Chrome 53 for low-memory Android devices. The interpreter brings immediate memory savings for JIT'ed code and will allow V8 to make future optimizations for faster startup during code execution. Ignition works in tandem with V8's existing optimizing compilers (TurboFan and Crankshaft) to ensure that “hot” code is still optimized for peak performance. We are continuing to improve interpreter performance and hope to enable Ignition soon on all platforms, mobile and desktop. Look for an upcoming blog post for more information about Ignition’s design, architecture, and performance gains. Embedded versions of V8 can turn on the Ignition interpreter with the flag --ignition.

Reduced jank

V8 version 5.3 includes various changes to reduce application jank and garbage collection times. These changes include:
  • Optimizing weak global handles to reduce the time spent handling external memory
  • Unifying the heap for full garbage collections to reduce evacuation jank
  • Optimizing V8’s black allocation additions to the garbage collection marking phase
Together, these improvements reduce full garbage collection pause times by about 25%, measured while browsing a corpus of popular webpages. For more detail on recent garbage collection optimizations to reduce jank, see the “Jank Busters” blog posts Part 1 & Part 2.


Improving page startup time

The V8 team recently began tracking performance improvements against a corpus of 25 real-world website page loads (including popular sites such as Facebook, Reddit, Wikipedia, and Instagram). Between V8 5.1 (measured in Chrome 51 from April) and V8 5.3 (measured in a recent Chrome Canary 53), we improved startup time in aggregate across the measured websites by ~7%. These improvements on real website loads mirrored similar gains on the Speedometer benchmark, which ran 14% faster in V8 5.3. For more details about our new testing harness, runtime improvements, and breakdown analysis of where V8 spends time during page loads, see our upcoming blog post on startup performance.

ES6 Promise performance

V8's performance on the Bluebird ES6 Promise benchmark suite improved by 20-40% in V8 version 5.3, varying by architecture and benchmark.

V8 Promise performance over time on a Nexus 5x


Please check out our summary of API changes. This document gets regularly updated a few weeks after each major release.

Developers with an active V8 checkout can use 'git checkout -b 5.3 -t branch-heads/5.3' to experiment with the new features in V8 5.3. Alternatively you can subscribe to Chrome's Beta channel and try the new features out yourself soon.

Posted by the V8 team

Saturday, June 4, 2016

V8 Release 5.2

Roughly every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s git master immediately before Chrome branches for a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 5.2, which will be in beta until it is released in coordination with Chrome 52 Stable. V8 5.2 is filled with all sorts of developer-facing goodies, so we’d like to give you a preview of some of the highlights in anticipation of the release in several weeks.

ES6 & ES7 support

V8 5.2 contains support for ECMAScript 6 (aka ES2015) and ECMAScript 7 (aka ES2016).

Exponentiation operator

This release contains support for the ES7 exponentiation operator, an infix notation to replace Math.pow.
let n = 3**3; // n == 27
n **= 2; // n == 729

Evolving spec

For more information on the complexities behind support for evolving specifications and continued standards discussion around web compatibility bugs and tail calls, see the V8 blog post ES6, ES7, and beyond.


V8 5.2 contains further optimizations to improve the performance of JavaScript built-ins, including improvements for Array operations like the isArray method, the in operator, and Function.prototype.bind. This is part of ongoing work to speed up built-ins based on new analysis of runtime call statistics on popular web pages. For more information, see the V8 Google I/O 2016 talk and look for an upcoming blog post on performance optimizations gleaned from real-world websites.
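For reference, the built-ins in question look like this in everyday code; their semantics are unchanged, only their speed improved:

Array.isArray([1, 2, 3]);  // true
'length' in [1, 2, 3];     // true (the in operator)
const add10 = function (x) { return this.base + x; }.bind({ base: 10 });
add10(5);                  // 15 (Function.prototype.bind)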


Please check out our summary of API changes. This document gets regularly updated a few weeks after each major release.

Developers with an active V8 checkout can use 'git checkout -b 5.2 -t branch-heads/5.2' to experiment with the new features in V8 5.2. Alternatively you can subscribe to Chrome's Beta channel and try the new features out yourself soon.

Posted by the V8 team

Friday, April 29, 2016

ES6, ES7, and beyond

The V8 team places great importance on the evolution of JavaScript into an increasingly expressive and well-defined language that makes writing fast, safe, and correct web applications easy. In June 2015, the ES6 specification was ratified by the TC39 standards committee, making it the largest single update to the JavaScript language. New features include classes, arrow functions, promises, iterators / generators, proxies, well-known symbols, and additional syntactic sugar. TC39 has also increased the cadence of new specifications and released the candidate draft for ES7 in February 2016, to be ratified this summer. While not as expansive as the ES6 update due to the shorter release cycle, ES7 notably introduces the exponentiation operator and Array.prototype.includes().

Today we’ve reached an important milestone: V8 supports ES6 and ES7. You can use the new language features today in Chrome Canary, and they will ship by default in the M52 release of Chromium.

Given the nature of an evolving spec, the differences between various types of conformance tests, and the complexity of maintaining web compatibility, it can be difficult to determine when a certain version of ECMAScript is considered fully supported by a JavaScript engine. Read on for why spec support is more nuanced than version numbers, why proper tail calls are still under discussion, and what caveats remain at play.

An evolving spec

When TC39 decided to publish more frequent updates to the JavaScript specification, the most up-to-date version of the language became the master, draft version. Although versions of the ECMAScript spec are still produced yearly and ratified, V8 implements a combination of the most recently ratified version (e.g. ES6), certain features which are close enough to standardization that they are safe to implement (e.g. the exponentiation operator and Array.prototype.includes() from the ES7 candidate draft), and a collection of bug fixes and web compatibility amendments from more recent drafts. Part of the rationale for such an approach is that language implementations in browsers should match the specification, even if it’s the specification that needs to be updated. In fact, the process of implementing a ratified version of the spec often uncovers many of the fixes and clarifications that comprise the next version of the spec.

Currently shipping parts of the evolving ECMAScript specification

For example, when implementing the ES6 RegExp sticky flag, the V8 team discovered that the semantics of the ES6 spec broke many existing sites (including all sites using versions 2.x.x of the popular XRegExp library on npm). Since compatibility is a cornerstone of the web, engineers from the V8 and Safari JavaScriptCore teams proposed an amendment to the RegExp specification to fix the breakage, which was agreed upon by TC39. The amendment won't appear in a ratified version until ES8, but it's still a part of the ECMAScript language and we've implemented it in order to ship the RegExp sticky flag.
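For context, the sticky (y) flag anchors matching at a regular expression’s lastIndex rather than searching the whole string:

const re = /foo/y;   // sticky flag
re.lastIndex = 3;
re.test('barfoo');   // true: a match starts exactly at index 3
re.lastIndex = 0;
re.test('barfoo');   // false: no match starting at index 0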

The continual refinement of the language specification and the fact that each version (including the yet-to-be-ratified draft) replaces, amends, and clarifies previous versions makes it tricky to understand the complexities behind ES6 and ES7 support. While it's impossible to state succinctly, it's perhaps most accurate to say that V8 supports compliance with the “continually maintained draft future ECMAScript standard”!

Measuring conformance

In an attempt to make sense of this specification complexity, there are a variety of ways to measure JavaScript engine compatibility with the ECMAScript standard. The V8 team, as well as other browser vendors, use the test262 test suite as the gold standard of conformance to the continually maintained draft future ECMAScript standard. This test suite is continually updated to match the spec and it provides 16,000 discrete functional tests for all the features and edge cases which make up a compatible, compliant implementation of JavaScript. Currently V8 passes approximately 98% of test262, and the remaining 2% are a handful of edge cases and future ES features not yet ready to be shipped.

Since it’s difficult to skim the enormous number of test262 tests, other conformance tests exist, such as the Kangax compatibility table. Kangax makes it easy to skim to see whether a particular feature (like arrow functions) has been implemented in a given engine, but doesn’t test all the conformance edge cases that test262 does. Currently, Chrome Canary scores a 98% on the Kangax table for ES6 and 100% on the sections of Kangax corresponding to ES7 (e.g. the sections labelled “2016 features” and “2016 misc” under the ESnext tab).

The remaining 2% of the Kangax ES6 table tests proper tail calls, a feature which has been implemented in V8, but deliberately turned off in Chrome Canary due to outstanding developer experience concerns detailed below. With the “Experimental JavaScript features” flag enabled, which forces this feature on, Canary scores 100% on the entirety of the Kangax table for ES6.

Proper Tail Calls

Proper tail calls have been implemented but not yet shipped given that a change to the feature is currently under discussion at TC39. ES6 specifies that strict mode function calls in tail position should never cause a stack overflow. While this is a useful guarantee for certain programming patterns, the current semantics have two problems. First, since the tail call elimination is implicit, it can be difficult for programmers to identify which functions are actually in tail call position. This means that developers may not discover misplaced attempted tail calls in their programs until they overflow the stack. Second, implementing proper tail calls requires eliding tail call stack frames from the stack, which loses information about execution flow. This in turn has two consequences:
  1. It makes it more difficult to understand during debugging how execution arrived at a certain point since the stack contains discontinuities and
  2. Error.prototype.stack contains less information about execution flow, which may break telemetry software that collects and analyzes client-side errors.
Implementing a shadow stack can improve the readability of call stacks, but the V8 and DevTools teams believe that debugging is easiest, most reliable, and most accurate when the stack displayed during debugging is completely deterministic and always matches the true state of the actual virtual machine stack. Moreover, a shadow stack is too expensive performance-wise to turn on all the time.
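A small example of our own (runnable with the --harmony-tailcalls flag mentioned below) shows why tail call positions are easy to misjudge:

'use strict'; // proper tail calls apply only in strict mode
function countDown(n) {
  if (n === 0) return 'done';
  return countDown(n - 1); // tail position: the frame can be elided
}
function sumTo(n) {
  if (n === 0) return 0;
  return n + sumTo(n - 1); // NOT a tail call: the addition runs after the call
}
countDown(1000000); // constant stack space with proper tail calls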

For these reasons, the V8 team strongly supports denoting proper tail calls by special syntax. There is a pending TC39 proposal called syntactic tail calls to specify this behavior, co-championed by committee members from Mozilla and Microsoft. We have implemented and staged proper tail calls as specified in ES6 and started implementing syntactic tail calls as specified in the new proposal. The V8 team plans to resolve the issue at the next TC39 meeting before shipping implicit proper tail calls or syntactic tail calls by default. You can test out each version in the meantime by using the V8 flags --harmony-tailcalls and --harmony-explicit-tailcalls.


One of the most exciting promises of ES6 is support for JavaScript modules to organize and separate different parts of an application into namespaces. ES6 specifies import and export declarations for modules, but not how modules are loaded into a JavaScript program. In the browser, loading behavior was recently specified by the new <script type="module"> tag. Although additional standardization work is needed to specify advanced dynamic module-loading APIs, Chromium support for module script tags is already in development. You can track implementation work on the launch bug and read more about experimental loader API ideas in the whatwg/loader repository.
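As a minimal sketch of the module syntax itself (the file names are our own):

// lib.js
export function square(x) {
  return x * x;
}

// main.js, loaded in the browser via <script type="module" src="main.js">
import { square } from './lib.js';
console.log(square(4)); // 16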

ESnext and beyond

In the future, developers can expect ECMAScript updates to come in smaller, more frequent updates with shorter implementation cycles. The V8 team is already working to bring upcoming features such as async / await keywords, Object.values() / Object.entries(), String.prototype.padStart() / String.prototype.padEnd() and RegExp lookbehind to the runtime. Check back for more updates on our ESnext implementation progress and performance optimizations for existing ES6 and ES7 features.
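To preview what those features look like in code (syntax as proposed at the time of writing, subject to change before the features ship):

async function addAsync(a, b) {
  const x = await Promise.resolve(a); // async / await
  return x + b;
}
addAsync(1, 2).then(sum => console.log(sum)); // 3

const obj = { a: 1, b: 2 };
Object.values(obj);          // [1, 2]
Object.entries(obj);         // [['a', 1], ['b', 2]]
'5'.padStart(3, '0');        // '005'
/(?<=\$)\d+/.exec('$42')[0]; // '42' (RegExp lookbehind)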

We strive to continue evolving JavaScript and strike the right balance of implementing new features early, ensuring compatibility and stability of the existing web, and providing TC39 implementation feedback around design concerns. We look forward to seeing the incredible experiences developers will build with these new features.

-- Posted by the V8 team, ECMAScript Enthusiasts