
Motoko update February 20, 2023

by Kento Sugama


Hey Motoko devs!

It’s time for a new Motoko update post!

If you missed the last update post from the languages team, check it out here! Last time we talked about catching errors on message sends, and a new version of Candid.

Today we’ll be talking about incremental garbage collection and VS Code extension performance improvements, plus a bonus section about using GitHub Copilot/GPT-3 with Motoko.

Incremental GC: presentation at global R&D!


@FIXMETAGLUC has been hard at work over the past several months building a new garbage collector for Motoko. This incremental GC is designed to alleviate scalability issues in the language by collecting unused objects more eagerly and by avoiding GC runs that sweep the entire heap.
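For those who want to experiment ahead of the talk: the new collector is opt-in via a compiler flag. Below is a minimal dfx.json sketch; the canister name and source path are placeholders, and the `--incremental-gc` flag spelling is my assumption here, so double-check it against the PR discussion before relying on it.

```json
{
  "canisters": {
    "example_backend": {
      "type": "motoko",
      "main": "src/example_backend/main.mo",
      "args": "--incremental-gc"
    }
  }
}
```

The `args` field passes extra flags through to the Motoko compiler (`moc`), which is how per-canister GC selection would work in this sketch.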

And tomorrow, Luc will be presenting on this project live at this month’s public global R&D presentation!

When: February 22, 2023 / 17:30 CET / 8:30 PDT

Where:

Register at the link above if you haven’t already! Also, this meeting will be recorded and posted on our YouTube channel in case you can’t make the live presentation.

If you want to see a detailed discussion of this feature, check out the PR here.

See you guys tomorrow!

VSCode extension optimizations


The VSCode extension now runs about 10x - 100x faster on large Motoko projects!

The performance improvements come from a smarter selection of which files to type-check. Instead of type-checking every file in the workspace, the extension now only checks open files and the dfx.json canister entry points (along with all files they import). This is layered with other optimizations, such as caching diagnostics, to reduce the number of messages passed between VS Code and the language server in large projects.
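To illustrate: given a workspace described by the hypothetical dfx.json below, the extension would type-check only src/backend/main.mo (plus anything it imports) and whatever .mo files you currently have open, rather than every Motoko file in the tree. The canister name and path are made up for illustration.

```json
{
  "canisters": {
    "backend": {
      "type": "motoko",
      "main": "src/backend/main.mo"
    }
  }
}
```

The `main` field of each Motoko canister is what marks a file as an entry point for this purpose.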

This optimization is released as an experimental feature, so please let us know if you run into any bugs!

Bonus: GitHub Copilot x Motoko

Recently, at the suggestion of @FIXMEKYLE, I’ve been playing around with GitHub Copilot, GitHub’s generative AI coding tool. The tool is a wrapper around GPT-3, a predecessor of the recently famous ChatGPT model. As you type, Copilot anticipates your intentions and unobtrusively suggests code that you can seamlessly accept or reject as you work. I’ve found it very useful for increasing my development speed, even with a new language such as Motoko, for which the model presumably has little training data. I thought I’d share my experience here in case you guys find it useful as well.

Find the tool here:

Note that for non-open-source projects, the tool may collect data about your repository/code, so please consider this before using it.

Till next time!

– DFINITY Languages team