
How to Set Up Cellular Bonding with Kiloview P3 and P3 Mini & How To Manage Your NDI Production System – Videoguys

On this week’s Videoguys Live, James is joined by Logan, our Workflow Sales Specialist, to explain how to set up and use the new P3 and P3 Mini, and how Kiloview’s NDI products and converters can be used to manage your NDI workflow.

Watch the full video below:



Kiloview P3 Mini 4G Wireless Bonding Encoder

  • Up to 6-channel Connections Bonding
  • 4K HDMI+3G-SDI with H.265 & H.264 Encoding
  • NDI | HX, RTMP, SRT, RTSP, HLS
  • 4.3-inch Touch LCD Screen
  • Dual Battery Module
  • KiloLink Server – Safe and flexible bonding software
  • Pro-Level Video Processing
  • Reliable Recording and Streaming

Step 1: Choose the P3 or P3 Mini

| Features | P3 | P3 Mini |
| --- | --- | --- |
| Video Input | 1x 4KP30 HDMI, 1x 3G-SDI | 1x 1080P60 HDMI, 1x 3G-SDI |
| Resolution | Up to 4KP30 | Up to 1080P60 |
| Recording | SD Card and USB Expansion | USB Expansion |
| Network | 5G / 4G / WiFi / Ethernet | 4G / WiFi / Ethernet |
| Display | 4.3″ LCD Touchscreen | 3″ LCD Touchscreen |
| Bonding Performance | 4 channels of 5G (or 4 channels of 4G) cellular, WiFi (2.4G/5G), Ethernet | 3 channels of 4G cellular, WiFi (2.4G/5G dual band), USB-expanded Ethernet |
| Battery | Built-in 3500mAh 7.2V (25.2Wh) plus external 7000mAh 7.2V (50.4Wh) | 4900mAh 7.2V (35.28Wh) |

Step 2: Make Sure You’re Connected

  • P3 Mini: on-board modems included
  • P3: P3 Modem Kits

Step 3: Set Up Your KiloLink Server

Kiloview P3 Workflow

Kiloview P3 Mini Workflow


How To Manage Your NDI Production System with Kiloview

NDI Converters allow you to put ANY video source on the network

  • NDI Encoders turn any HDMI or SDI camcorder into an NDI camera
  • NDI Decoders allow you to send any source to any screen
  • Kiloview bi-directional converters give you the flexibility for every application

Why should you migrate to NDI?

  • Fewer cables!
  • Power, Control, and Video all through one Ethernet cable
  • Control and send data back to the NDI devices – PTZ Control, color shading, tally, comms, etc.
  • Every NDI device can see every other NDI device on the network
  • Replaces SDI or HDMI cabling
  • Cost effective solution that allows you to expand and scale your productions with ease
  • Use NDI enabled cameras or add NDI converters
  • Use NDI decoders to deliver video anywhere on the network

Kiloview NDI Products are Your Production Solution

  • NDI is fast, efficient and easy to use. 
  • It’s a low cost, multi-channel solution that is easy to deploy and maintain. 
  • NDI supports tally, voice intercom, video return, and PTZ control. 
  • NDI is a total production solution.

Kiloview E3

Introducing the next-generation video encoder for live, post, and REMI workflows

Dual-channel 4K HDMI & 3G-SDI encoder with flexibility for all workflow solutions

Kiloview CUBE R1

Multiview and record all your NDI sources

9-channel HD / 4-channel 4K multiviewer and ISO recording

Kiloview CUBE X1

Turnkey NDI distribution solution

Switch and distribute NDI with 13 input channels and 26 output channels

Kiloview N6 and N5 – Bi-Directional NDI conversion with 2-in-1 Encoding and Decoding

Kiloview N60 and N50 – Bi-Directional Encoder now with NDI and SRT

How AI is improving simulations with smarter sampling techniques

Imagine you’re tasked with sending a team of football players onto a field to assess the condition of the grass (a likely task for them, of course). If you pick their positions randomly, they might cluster together in some areas while completely neglecting others. But if you give them a strategy, like spreading out uniformly across the field, you might get a far more accurate picture of the grass condition.

Now, imagine needing to spread out not just in two dimensions, but across tens or even hundreds. That’s the challenge MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers are getting ahead of. They’ve developed an AI-driven approach to “low-discrepancy sampling,” a method that improves simulation accuracy by distributing data points more uniformly across space.

A key novelty lies in using graph neural networks (GNNs), which allow points to “communicate” and self-optimize for better uniformity. Their approach marks a pivotal enhancement for simulations in fields like robotics, finance, and computational science, particularly in handling complex, multidimensional problems critical for accurate simulations and numerical computations.

“In many problems, the more uniformly you can spread out points, the more accurately you can simulate complex systems,” says T. Konstantin Rusch, lead author of the new paper and MIT CSAIL postdoc. “We’ve developed a method called Message-Passing Monte Carlo (MPMC) to generate uniformly spaced points, using geometric deep learning techniques. This further allows us to generate points that emphasize dimensions which are particularly important for a problem at hand, a property that is highly important in many applications. The model’s underlying graph neural networks let the points ‘talk’ with each other, achieving far better uniformity than previous methods.”

Their work was published in the September issue of the Proceedings of the National Academy of Sciences.

Take me to Monte Carlo

The idea of Monte Carlo methods is to learn about a system by simulating it with random sampling. Sampling is the selection of a subset of a population to estimate characteristics of the whole population. The technique dates back to the 18th century, when mathematician Pierre-Simon Laplace employed it to estimate the population of France without having to count each individual.
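As a toy illustration (not from the paper), here is a minimal Monte Carlo estimate of π: sample random points in the unit square and count the fraction that lands inside the quarter circle.

```python
# Minimal Monte Carlo example (illustrative only): estimate pi by sampling
# uniform random points in the unit square and counting how many fall
# inside the quarter circle of radius 1.
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter circle has area pi/4, so the hit fraction estimates pi/4.
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # roughly 3.14; the error shrinks like 1/sqrt(n)
```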

Low-discrepancy sequences (sequences with low discrepancy, i.e., high uniformity), such as Sobol’, Halton, and Niederreiter, have long been the gold standard for quasi-random sampling, which replaces purely random samples with low-discrepancy ones. They are widely used in fields like computer graphics and computational finance, for everything from pricing options to risk assessment, where uniformly filling a space with points can lead to more accurate results.
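To get a feel for the difference, SciPy ships standard quasi-random generators. A minimal sketch, assuming SciPy 1.7 or newer is installed, that compares the discrepancy of pseudo-random points with a scrambled Sobol’ sequence:

```python
# Rough comparison of pseudo-random vs. Sobol' low-discrepancy points
# using SciPy's quasi-Monte Carlo module (assumes scipy >= 1.7).
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
random_points = rng.random((256, 2))                                     # plain pseudo-random
sobol_points = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=8)   # 2^8 = 256 points

# Lower discrepancy means the points fill the unit square more uniformly.
print("random:", qmc.discrepancy(random_points))
print("Sobol':", qmc.discrepancy(sobol_points))
```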

The MPMC framework suggested by the team transforms random samples into points with high uniformity. This is done by processing the random samples with a GNN that minimizes a specific discrepancy measure.

One big challenge of using AI for generating highly uniform points is that the usual way to measure point uniformity is very slow to compute and hard to work with. To solve this, the team switched to a quicker and more flexible uniformity measure called L2-discrepancy. For high-dimensional problems, where this method isn’t enough on its own, they use a novel technique that focuses on important lower-dimensional projections of the points. This way, they can create point sets that are better suited for specific applications.
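The exact objective used by MPMC isn’t reproduced here, but one common closed-form uniformity measure of this kind is Warnock’s formula for the L2 star discrepancy, the sort of smooth, cheap-to-evaluate quantity a network can minimize. A small NumPy sketch (an illustrative choice; the paper may use a different variant):

```python
# Warnock's closed-form L2 star discrepancy of a point set in [0, 1]^d.
# Smaller values mean the points fill the unit cube more uniformly.
# (Illustrative measure; MPMC may optimize a different discrepancy variant.)
import numpy as np

def l2_star_discrepancy(points: np.ndarray) -> float:
    n, d = points.shape
    term1 = (1.0 / 3.0) ** d
    term2 = (2.0 / n) * np.prod((1.0 - points ** 2) / 2.0, axis=1).sum()
    # Pairwise products over dimensions of (1 - max(x_ik, x_jk)).
    maxima = np.maximum(points[:, None, :], points[None, :, :])
    term3 = np.prod(1.0 - maxima, axis=2).sum() / n ** 2
    return float(np.sqrt(term1 - term2 + term3))

rng = np.random.default_rng(0)
print(l2_star_discrepancy(rng.random((128, 2))))  # random points: relatively high value
```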

The implications extend far beyond academia, the team says. In computational finance, for example, simulations rely heavily on the quality of the sampling points. “With these types of methods, random points are often inefficient, but our GNN-generated low-discrepancy points lead to higher precision,” says Rusch. “For instance, we considered a classical problem from computational finance in 32 dimensions, where our MPMC points beat previous state-of-the-art quasi-random sampling methods by a factor of four to 24.”

Robots in Monte Carlo

In robotics, path and motion planning often rely on sampling-based algorithms, which guide robots through real-time decision-making processes. The improved uniformity of MPMC could lead to more efficient robotic navigation and real-time adaptations for things like autonomous driving or drone technology. “In fact, in a recent preprint, we demonstrated that our MPMC points achieve a fourfold improvement over previous low-discrepancy methods when applied to real-world robotics motion planning problems,” says Rusch.

“Traditional low-discrepancy sequences were a major advancement in their time, but the world has become more complex, and the problems we’re solving now often exist in 10, 20, or even 100-dimensional spaces,” says Daniela Rus, CSAIL director and MIT professor of electrical engineering and computer science. “We needed something smarter, something that adapts as the dimensionality grows. GNNs are a paradigm shift in how we generate low-discrepancy point sets. Unlike traditional methods, where points are generated independently, GNNs allow points to ‘chat’ with one another so the network learns to place points in a way that reduces clustering and gaps — common issues with typical approaches.”

Going forward, the team plans to make MPMC points even more accessible to everyone, addressing the current limitation of training a new GNN for every fixed number of points and dimensions.

“Much of applied mathematics uses continuously varying quantities, but computation typically allows us to only use a finite number of points,” says Art B. Owen, Stanford University professor of statistics, who wasn’t involved in the research. “The century-plus-old field of discrepancy uses abstract algebra and number theory to define effective sampling points. This paper uses graph neural networks to find input points with low discrepancy compared to a continuous distribution. That approach already comes very close to the best-known low-discrepancy point sets in small problems and is showing great promise for a 32-dimensional integral from computational finance. We can expect this to be the first of many efforts to use neural methods to find good input points for numerical computation.”

Rusch and Rus wrote the paper with University of Waterloo researcher Nathan Kirk, Oxford University’s DeepMind Professor of AI and former CSAIL affiliate Michael Bronstein, and University of Waterloo Statistics and Actuarial Science Professor Christiane Lemieux. Their research was supported, in part, by the AI2050 program at Schmidt Futures, Boeing, the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator, the Swiss National Science Foundation, Natural Science and Engineering Research Council of Canada, and an EPSRC Turing AI World-Leading Research Fellowship. 

An interstellar instrument takes a final bow

They planned to fly for four years and to get as far as Jupiter and Saturn. But nearly half a century and 15 billion miles later, NASA’s twin Voyager spacecraft have far exceeded their original mission, winging past the outer planets and busting out of our heliosphere, beyond the influence of the sun. The probes are currently making their way through interstellar space, traveling farther than any human-made object.

Along their improbable journey, the Voyagers made first-of-their-kind observations at all four giant outer planets and their moons using only a handful of instruments, including MIT’s Plasma Science Experiments — identical plasma sensors that were designed and built in the 1970s in Building 37 by MIT scientists and engineers.

The Plasma Science Experiment (also known as the Plasma Spectrometer, or PLS for short) measured charged particles in planetary magnetospheres, the solar wind, and the interstellar medium, the material between stars. Since launching on the Voyager 2 spacecraft in 1977, the PLS has revealed new phenomena near all the outer planets and in the solar wind across the solar system. The experiment played a crucial role in confirming the moment when Voyager 2 crossed the heliosphere and moved outside of the sun’s regime, into interstellar space.

Now, to conserve the little power left on Voyager 2 and prolong the mission’s life, the Voyager scientists and engineers have made the decision to shut off MIT’s Plasma Science Experiment. It’s the first in a line of science instruments that will progressively blink off over the coming years. On Sept. 26, the Voyager 2 PLS sent its last communication from 12.7 billion miles away, before it received the command to shut down.

MIT News spoke with John Belcher, the Class of 1922 Professor of Physics at MIT, who was a member of the original team that designed and built the plasma spectrometers, and John Richardson, principal research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, who is the experiment’s principal investigator. Both Belcher and Richardson offered their reflections on the retirement of this interstellar piece of MIT history.

Q: Looking back at the experiment’s contributions, what are the greatest hits, in terms of what MIT’s Plasma Spectrometer has revealed about the solar system and interstellar space?

Richardson: A key PLS finding at Jupiter was the discovery of the Io torus, a plasma donut surrounding Jupiter, formed from sulphur and oxygen from Io’s volcanos (which were discovered in Voyager images). At Saturn, PLS found a magnetosphere full of water and oxygen that had been knocked off of Saturn’s icy moons. At Uranus and Neptune, the tilt of the magnetic fields led to PLS seeing smaller density features, with Uranus’ plasma disappearing near the planet. Another key PLS observation was of the termination shock, which was the first observation of the plasma at the largest shock in the solar system, where the solar wind stopped being supersonic. This boundary had a huge drop in speed and an increase in the density and temperature of the solar wind. And finally, PLS documented Voyager 2’s crossing of the heliopause by detecting a stopping of outward-flowing plasma. This signaled the end of the solar wind and the beginning of the local interstellar medium (LISM). Although not designed to measure the LISM, PLS constantly measured the interstellar plasma currents beyond the heliosphere. It is very sad to lose this instrument and data!

Belcher: It is important to emphasize that PLS was the result of decades of development by MIT Professor Herbert Bridge (1919-1995) and Alan Lazarus (1931-2014). The first version of the instrument they designed was flown on Explorer 10 in 1961. And the most recent version is flying on the Solar Probe, which is collecting measurements very close to the sun to understand the origins of solar wind. Bridge was the principal investigator for plasma probes on spacecraft which visited the sun and every major planetary body in the solar system.

Q: During their tenure aboard the Voyager probes, how did the plasma sensors do their job over the last 47 years?

Richardson: There were four Faraday cup detectors designed by Herb Bridge that measured currents from ions and electrons that entered the detectors. By measuring these particles at different energies, we could find the plasma velocity, density, and temperature in the solar wind and in the four planetary magnetospheres Voyager encountered. Voyager data were (and are still) sent to Earth every day and received by NASA’s deep space network of antennae. Keeping two 1970s-era spacecraft going for 47 years and counting has been an amazing feat of JPL engineering prowess — you can google the most recent rescue when Voyager 1 lost some memory in November of 2023 and stopped sending data. JPL figured out the problem and was able to reprogram the flight data system from 15 billion miles away, and all is back to normal now. Shutting down PLS involves sending a command which will get to Voyager 2 about 19 hours later, providing the rest of the spacecraft enough power to continue.
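For context, the roughly 19-hour figure is simply the one-way light-travel time across the 12.7 billion miles quoted earlier; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ~19-hour one-way signal time.
distance_miles = 12.7e9
distance_km = distance_miles * 1.609344      # miles to kilometres
light_speed_km_s = 299_792.458               # speed of light in km/s
hours = distance_km / light_speed_km_s / 3600
print(round(hours, 1))                       # about 18.9 hours
```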

Q: Once the plasma sensors have shut down, how much more could Voyager do, and how far might it still go?

Richardson: Voyager will still measure the galactic cosmic rays, magnetic fields, and plasma waves. The available power decreases about 4 watts per year as the plutonium which powers them decays. We hope to keep some of the instruments running until the mid-2030s, but that will be a challenge as power levels decrease.

Belcher: Nick Oberg at the Kapteyn Astronomical Institute in the Netherlands has made an exhaustive study of the future of the spacecraft, using data from the European Space Agency’s spacecraft Gaia. In about 30,000 years, the spacecraft will reach the distance to the nearest stars. Because space is so vast, there is zero chance that the spacecraft will collide directly with a star in the lifetime of the universe. However, the spacecraft’s surface will slowly erode through microcollisions with vast clouds of interstellar dust.

In Oberg’s estimate, the Golden Records [identical records that were placed aboard each probe, that contain selected sounds and images to represent life on Earth] are likely to survive for a span of over 5 billion years. After those 5 billion years, things are difficult to predict, since at this point, the Milky Way will collide with its massive neighbor, the Andromeda galaxy. During this collision, there is a one in five chance that the spacecraft will be flung into the intergalactic medium, where there is little dust and little weathering. In that case, it is possible that the spacecraft will survive for trillions of years. A trillion years is about 100 times the current age of the universe. The Earth ceases to exist in about 6 billion years, when the sun enters its red giant phase and engulfs it.

In a “poor man’s” version of the Golden Record, Robert Butler, the chief engineer of the Plasma Instrument, inscribed the names of the MIT engineers and scientists who had worked on the spacecraft on the collector plate of the side-looking cup. Butler’s home state was New Hampshire, and he put the state motto, “Live Free or Die,” at the top of the list of names. Thanks to Butler, although New Hampshire will not survive for a trillion years, its state motto might. The flight spare of the PLS instrument is now displayed at the MIT Museum, where you can see the text of Butler’s message by peering into the side-looking sensor. 

Those Non-Design Technologies Web Designers Need to Know – Speckyboy

We call ourselves web designers and developers. However, the job often goes beyond those narrow margins.

Freelancers and small agencies deal with a range of issues beyond design and code. We become the first person our clients contact when they have a question. It happens – even when we aren’t directly involved with the subject matter.

  • I just received this message from Google. What does it mean?
  • Why can’t I receive email from my website?
  • My website was hacked. Help!

Yes, we are the catch-all technical support representatives. No matter the problem, web designers are the solution. That’s what some clients think, at least.

We’re often the link between clients and technology. And perhaps we shouldn’t try to tackle every problem. But it wouldn’t hurt to brush up on a few non-design technologies.

With that in mind, here are a few areas that web designers should study. You know, just in case.


SEO & Site Indexing Basics

Search engine optimization (SEO) is a niche unto itself. Some professionals specialize in making sure websites are indexed and rank well.

That doesn’t stop clients from asking their web designer, though. Site owners want to rank highly in Google search results. And they are often in the dark about how to do it.

To that end, it’s worth learning the basics of SEO. Even if the subject makes your skin crawl.

You’ll be able to explain the hows and whys to clients. That will help them make more informed decisions about content. They may decide to jump in feet first with an SEO professional.

Clients will ask you about SEO. A little background knowledge makes you look smart!

SEO Resources

Understanding how search engines work can benefit you and your clients.

DNS & Email Delivery

Launching or moving a website often includes changing a domain’s DNS settings. These settings ensure that the site directs users to the right place.

DNS covers much more than that, though. There are also settings for configuring email, and that has become a hot topic these days.

Email providers are increasingly requiring domain owners to verify their properties. Domains without DKIM, DMARC, or SPF records may have email delivery issues. For example, Gmail blocks email from unauthenticated domains.

What does this have to do with web design? Well, websites with contact forms can fall victim to these issues. The same goes for eCommerce websites. An unauthenticated domain means clients and users will miss these emails.

Now is the time to learn how DNS works. You’ll want to pay special attention to email. Clients without an IT department may need your help ensuring smooth email delivery.
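As a starting point, you can check what a domain already publishes. A minimal sketch using the third-party dnspython package (an assumed dependency, installed with `pip install dnspython`; `example.com` is a placeholder for a client’s domain):

```python
# Minimal sketch: look up a domain's SPF and DMARC TXT records with dnspython
# (third-party package: pip install dnspython). example.com is a placeholder.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
# DKIM lives at <selector>._domainkey.<domain>, so checking it requires the sender's selector.
```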

DNS & Email Resources

Email deliverability issues can be prevented by adding domain verification records.

Security for Websites and Beyond

We live in an age of online insecurity. Malicious actors don’t take a minute off. Instead, they continue to wreak havoc.

Sure, we talk about web security quite a bit. And we try our best to build a virtual moat around websites. But websites are still being compromised.

We’re learning that security goes deeper than installing updates or tweaking .htaccess files. The health of a user’s device also plays a role.

Stolen session cookies are a prime example. Hackers can grab them off of a compromised device. A “bulletproof” website is no match for a phone with an info stealer installed. With a stolen session cookie, an attacker can waltz right in and do whatever they want.

Understanding how device security impacts the web is crucial. It’s something that can benefit us and our clients. After all, a single weak link can break the chain.

Website Security Resources

Websites are under a constant threat from hackers.

Command Line Tools

Some of us cringe at the mere thought of using a command line tool. Hasn’t that stuff gone the way of the dinosaur?

Nothing could be further from the truth. Command line tools like WordPress CLI remain popular. Why is that? It’s all about power and efficiency.

The command line doesn’t have the overhead of a graphical user interface (GUI). Thus, it handles bulk operations faster. For example, you can perform a search-and-replace operation on a database more quickly.
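For example, WP-CLI exposes this as `wp search-replace`. A minimal sketch, assuming the `wp` binary is installed and the script runs from inside a WordPress install, that previews the change from Python before committing to it:

```python
# Minimal sketch: preview a WordPress database search-and-replace via WP-CLI.
# Assumes the `wp` binary is on PATH and the script runs inside a WordPress install.
import subprocess

result = subprocess.run(
    ["wp", "search-replace",
     "http://old-domain.example", "https://new-domain.example",
     "--dry-run"],                      # preview only; drop --dry-run to apply
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```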

You can also do a lot of behind-the-scenes work with your web server. The command line may be the only way to run specific tasks.

It’s worth brushing up on command-line operations. They are a huge time saver in the right circumstances.

Command Line Resources

Command line tools are still a popular way to perform tasks.

Become a More Well-Rounded Web Designer

The skills above are all adjacent to web design. And the need for this knowledge is growing.

Perhaps that has always been the case with SEO. Meanwhile, security and DNS seem to be just about mandatory these days.

Working with clients means you inevitably will face questions about these subjects. Freelancers and small agencies don’t always have an expert within reach. So, it’s up to us to find answers.

The command line is more about adding another tool to your toolbox. The improved efficiency will benefit you. And the result is better service for your clients.

Web designers tend to be specialists. We focus on the front-end or back-end. But the more we know, the more well-rounded we become.

It’s one way to stay on the cutting edge of the industry for years to come.



Q&A: A new initiative to help strengthen democracy

In the United States and around the world, democracy is under threat. Anti-democratic attitudes have become more prevalent, partisan polarization is growing, misinformation is omnipresent, and politicians and citizens sometimes question the integrity of elections. 

With this backdrop, the MIT Department of Political Science is launching an effort to establish a Strengthening Democracy Initiative. In this Q&A, department head David Singer, the Raphael Dorman-Helen Starbuck Professor of Political Science, discusses the goals and scope of the initiative.

Q: What is the purpose of the Strengthening Democracy Initiative?

A: Well-functioning democracies require accountable representatives, accurate and freely available information, equitable citizen voice and participation, free and fair elections, and an abiding respect for democratic institutions. It is unsettling for the political science community to see more and more evidence of democratic backsliding in Europe, Latin America, and even here in the U.S. While we cannot single-handedly stop the erosion of democratic norms and practices, we can focus our energies on understanding and explaining the root causes of the problem, and devising interventions to maintain the healthy functioning of democracies.

MIT political science has a history of generating important research on many facets of the democratic process, including voting behavior, election administration, information and misinformation, public opinion and political responsiveness, and lobbying. The goals of the Strengthening Democracy Initiative are to place these various research programs under one umbrella, to foster synergies among our various research projects and between political science and other disciplines, and to mark MIT as the country’s leading center for rigorous, evidence-based analysis of democratic resiliency.

Q: What is the initiative’s research focus?

A: The initiative is built upon three research pillars. One pillar is election science and administration. Democracy cannot function without well-run elections and, just as important, popular trust in those elections. Even within the U.S., let alone other countries, there is tremendous variation in the electoral process: whether and how people register to vote, whether they vote in person or by mail, how polling places are run, how votes are counted and validated, and how the results are communicated to citizens.

The MIT Election Data and Science Lab is already the country’s leading center for the collection and analysis of election-related data and dissemination of electoral best practices, and it is well positioned to increase the scale and scope of its activities.

The second pillar is public opinion, a rich area of study that includes experimental studies of public responses to misinformation and analyses of government responsiveness to mass attitudes. Our faculty employ survey and experimental methods to study a range of substantive areas, including taxation and health policy, state and local politics, and strategies for countering political rumors in the U.S. and abroad. Faculty research programs form the basis for this pillar, along with longstanding collaborations such as the Political Experiments Research Lab, an annual omnibus survey in which students and faculty can participate, and frequent conferences and seminars.

The third pillar is political participation, which includes the impact of the criminal justice system and other negative interactions with the state on voting, the creation of citizen assemblies, and the lobbying behavior of firms on Congressional legislation. Some of this research relies on machine learning and AI to cull and parse an enormous amount of data, giving researchers visibility into phenomena that were previously difficult to analyze. A related research area on political deliberation brings together computer science, AI, and the social sciences to analyze the dynamics of political discourse in online forums and the possible interventions that can attenuate political polarization and foster consensus.

The initiative’s flexible design will allow for new pillars to be added over time, including international and homeland security, strengthening democracies in different regions of the world, and tackling new challenges to democratic processes that we cannot see yet.

Q: Why is MIT well-suited to host this new initiative?

A: Many people view MIT as a STEM-focused, highly technical place. And indeed it is, but there is a tremendous amount of collaboration across and within schools at MIT — for example, between political science and the Schwarzman College of Computing and the Sloan School of Management, and between the social science fields and the schools of science and engineering. The Strengthening Democracy Initiative will benefit from these collaborations and create new bridges between political science and other fields. It’s also important to note that this is a nonpartisan research endeavor. The MIT political science department has a reputation for rigorous, data-driven approaches to the study of politics, and its position within the MIT ecosystem will help us to maintain a reputation as an “honest broker,” and to disseminate path-breaking, evidence-based research and interventions to help democracies become more resilient.

Q: Will the new initiative have an educational mission?

A: Of course! The department has a long history of bringing in scores of undergraduate researchers via MIT’s Undergraduate Research Opportunities Program. The initiative will be structured to provide these students with opportunities to study various facets of the democratic process, and for faculty to have a ready pool of talented students to assist with their projects. My hope is to provide students with the resources and opportunities to test their own theories by designing and implementing surveys in the U.S. and abroad, and use insights and tools from computer science, applied statistics, and other disciplines to study political phenomena. As the initiative grows, I expect more opportunities for students to collaborate with state and local officials on improvements to election administration, and to study new puzzles related to healthy democracies.

Postdoctoral researchers will also play a prominent role by advancing research across the initiative’s pillars, supervising undergraduate researchers, and handling some of the administrative aspects of the work.

Q: This sounds like a long-term endeavor. Do you expect this initiative to be permanent?

A: Yes. We already have the pieces in place to create a leading center for the study of healthy democracies (and how to make them healthier). But we need to build capacity, including resources for a pool of researchers to shift from one project to another, which will permit synergies between projects and foster new ones. A permanent initiative will also provide the infrastructure for faculty and students to respond swiftly to current events and new research findings — for example, by launching a nationwide survey experiment, or collecting new data on an aspect of the electoral process, or testing the impact of a new AI technology on political perceptions. As I like to tell our supporters, there are new challenges to healthy democracies that were not on our radar 10 years ago, and no doubt there will be others 10 years from now that we have not imagined. We need to be prepared to do the rigorous analysis on whatever challenges come our way. And MIT Political Science is the best place in the world to undertake this ambitious agenda in the long term.