Machine unlearning: Researchers make AI models ‘forget’ data

Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data. Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do…

CSS Properties to Make Hyperlinks More Attractive – Speckyboy

Hyperlinks don’t always get the attention they deserve from web designers. Sure, we might make a few tweaks. However, we don’t always go the extra mile to make them stand out.

Perhaps that’s because the styling options used to be limited. Link color and underlining were the primary options. Hover and focus states seemed to be where most of the creativity occurred. Other enhancements tended to require add-ons like JavaScript.

CSS has changed the game in recent years. Several properties are available to customize the look of hyperlinks. They also provide a higher level of control regarding details like spacing and sizing.

It’s a whole new world of possibilities. Let’s check out some CSS properties that help make hyperlinks more attractive.

A Default Link

We’ll start with a default link configuration. A link color and CSS states were added – but that’s it. It will serve as a baseline as we explore the CSS properties below.
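In code, a baseline like this amounts to little more than a link color and a couple of state rules (the color values here are illustrative, not the exact ones from the demo):

```css
/* A plain baseline link: a color plus hover/focus states */
a {
  color: #2e7d32; /* illustrative green */
}

a:hover,
a:focus {
  color: #1b5e20; /* slightly darker on interaction */
}
```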

See the Pen Link Styling: Plain by Eric Karkovack

It used to be that a link’s underline had to be the same color as its text. The text-decoration-color property allows us to choose a separate hue. It also works with overlines, strikethroughs, and anything else set by the text-decoration property.

We’ve added a brown underline to complement our green link text.
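A minimal sketch of that pairing (the green and brown values are illustrative):

```css
/* Green link text with a contrasting brown underline */
a {
  color: #2e7d32;
  text-decoration: underline;
  text-decoration-color: #8d6e63; /* applies to overlines and strikethroughs, too */
}
```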

See the Pen Link Styling: text-decoration-color by Eric Karkovack

This niche property, text-decoration-skip-ink, determines how the link’s text decoration interacts with glyphs. The default setting is auto, where the browser interrupts underlines and overlines so they don’t touch a glyph. You’ll notice this with lowercase letters whose descenders dip below the baseline.

Setting the property to none means the underline or overline draws a straight line – regardless of where glyphs are located.
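Switching off the default skipping behavior takes a single declaration:

```css
/* Draw the underline straight through descenders instead of skipping them */
a {
  text-decoration: underline;
  text-decoration-skip-ink: none; /* the default is auto */
}
```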

See the Pen Link Styling: text-decoration-skip-ink by Eric Karkovack

The thickness of a link’s underline typically follows what’s defined in the font-weight property. That is, bold text will result in a thicker underline. The text-decoration-thickness property lets us set a consistent value for every link in the cascade.
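For example (the 3px value is illustrative; a relative unit like em also works):

```css
/* A consistent underline thickness, regardless of font weight */
a {
  text-decoration: underline;
  text-decoration-thickness: 3px; /* or a relative value such as 0.1em */
}
```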

See the Pen Link Styling: text-decoration-thickness by Eric Karkovack

Text decorations don’t have to be a straight line. The text-decoration-style property lets you change the style to dashed, dotted, double, or wavy lines.
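Any of those keywords drops straight into a rule like this one:

```css
/* A wavy underline; dashed, dotted, and double work the same way */
a {
  text-decoration: underline;
  text-decoration-style: wavy;
}
```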

See the Pen Link Styling: text-decoration-style by Eric Karkovack

The text-underline-offset property specifies how close an underline sits to the text above it. Adding a few pixels of space between them can improve legibility.

Note that this property doesn’t impact instances of the HTML underline tag (<u>). It only affects instances where the text-decoration property is set.
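A quick sketch, with an illustrative offset value:

```css
/* Push the underline a few pixels away from the text */
a {
  text-decoration: underline;
  text-underline-offset: 4px; /* only affects text-decoration underlines, not <u> */
}
```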

See the Pen Link Styling: text-underline-offset by Eric Karkovack

Another niche property, text-underline-position, specifies the position of the underline relative to its text. Setting it to under is ideal for mathematical and scientific formulas. It makes subscript characters easy to read – even when underlined.
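Applying it looks like this:

```css
/* Draw the underline below descenders and subscript characters */
a {
  text-decoration: underline;
  text-underline-position: under;
}
```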

See the Pen Link Styling: text-underline-position by Eric Karkovack

Going Further with Link Styles

Hyperlinks don’t have to be bland. There are now plenty of ways to make them as much a part of your brand as other design elements.

The properties above are all worth considering when styling links. They’re relatively simple to implement and can make links more attractive and accessible. Best of all, they’re native CSS and won’t impact page load performance.

You can also use them beyond default styles. Style them for various states, such as changing the link’s underline color during a hover event. In addition, there’s an opportunity to add animation and transitions to create all sorts of fun micro-interactions.

Just beware – it’s possible to take things a little too far. Always keep best practices in mind to enhance the user experience. Anything that makes links harder to read or use isn’t worth doing.

It’s time to get creative! Experiment with these CSS properties and see how you can bring a little life to your links.


NDI: Revolutionizing IP-Based Video Production for Professional Workflows

In this article by Stephan Kexel for Riwit, we explore how NewTek’s Network Device Interface (NDI) is transforming the world of IP-based video production and why it’s becoming a must-have technology for modern video workflows.

What is NDI and How Has It Transformed Video Production?

In 2015, NewTek introduced the Network Device Interface (NDI), a revolutionary technology that has reshaped how video and audio signals are transmitted across networks. By utilizing standard IP networks, NDI eliminates the need for the expensive, complex cabling and hardware traditionally required for professional video production. NDI has enabled real-time, high-quality video and audio transmission over IP networks, allowing for smoother, more cost-efficient workflows across various sectors, including live production, broadcast, online streaming, and even educational video production.

Since its launch, NDI has evolved significantly. NewTek’s continued development of NDI has optimized it for a range of use cases, particularly in live production environments. It now supports H.264 and H.265 compression techniques, which allow for reduced bandwidth usage without compromising video quality. This makes NDI an adaptable solution for professionals needing to scale their operations without incurring high costs.

Key Benefits of NDI for Modern Video Productions

The flexibility and scalability of NDI make it a game-changer for video production professionals. Key benefits include:

  • Real-time video transmission over standard IP networks, eliminating the need for complex infrastructure.
  • High interoperability, as NDI is supported by a wide range of devices and software, seamlessly integrating into existing workflows.
  • Scalability for dynamic live production environments, allowing easy addition of new sources and receivers without complicated setups.
  • Cost-efficiency, thanks to the ability to run over existing networks, reducing the need for specialized hardware and cabling.

NDI and Atomos: A Perfect Partnership for IP-Based Video Production

The combination of NDI technology and Atomos devices, such as the Ninja and Shogun series, has taken professional video production to the next level. Atomos is a leading manufacturer of powerful monitor/recorder devices, and with NDI integration, it offers an enhanced, IP-based video workflow solution. The integration of NDI into Atomos devices allows video professionals to enjoy seamless, real-time video streaming over IP, making it easier to manage complex productions.

New AtomOS 11.11.00 Update: Enhancing NDI Features for Atomos Devices

The release of AtomOS 11.11.00 brings several key features that further enhance the performance of Atomos Ninja and Shogun devices in NDI workflows:

  • Support for H.265 codec: The update introduces H.265 codec support, optimizing bandwidth usage in networks with limited capacity. This results in higher-quality video at lower bitrates.
  • NDI Device Naming and Group Management: Users can now assign specific names to NDI devices and create device groups, improving organization and workflow in larger, more complex environments.
  • NDI Multicast: This feature allows video signals to be sent to multiple receivers simultaneously without consuming extra network resources, ideal for live events with multiple displays or control rooms.
  • Improved NDI Control Options: The Connect Tab in AtomOS has been upgraded to offer more intuitive controls for transmitting (Tx) and receiving (Rx) NDI signals, enhancing user experience and ease of management.

How NDI and Atomos are Redefining IP-Based Video Workflows

Together, NDI technology and Atomos’ Ninja and Shogun devices, combined with the latest AtomOS 11.11.00 update, are redefining the future of IP-based video production. With enhanced features like H.265 support, multicast, and improved NDI control, production teams can achieve maximum flexibility and efficiency in their workflows. This integration provides a cost-effective, high-performance solution for professional video production that meets the demands of today’s dynamic and fast-paced video production environments.

The future of video production is here, and NDI is at the forefront of this revolution, offering unparalleled benefits in cost, efficiency, and scalability for video professionals worldwide.


YoloBox Ultra Changed My Livestreams!

In this YouTube video, LEOPAZZO TV goes over the YoloBox Ultra in great detail and shows how it changed his livestreams for the better!

The YoloBox Ultra is redefining live streaming by combining advanced features with portability, making it the ultimate device for content creators and live event producers. Whether you’re streaming on YouTube, Facebook, Twitch, or another platform, this powerful device delivers professional-quality results with ease.

What Makes the YoloBox Ultra a Game-Changer?

The YoloBox Ultra is equipped with a 7-inch touchscreen, offering a user-friendly interface for seamless switching between multiple video sources. With support for up to six inputs, including three HDMI, one USB, and RTMP integration for remote streams, this device handles even the most complex streaming needs effortlessly. Its 4K streaming capabilities ensure your live broadcasts look sharp, vibrant, and professional.

Additionally, the YoloBox Ultra comes with built-in graphics and overlay features, allowing users to add custom titles, logos, and picture-in-picture effects directly from the device. This eliminates the need for extra software, streamlining your workflow while delivering a polished, studio-quality presentation.

Portability and Connectivity for On-the-Go Streaming

One of the standout features of the YoloBox Ultra is its portability. Designed for creators who are always on the move, the device supports Wi-Fi and 4G LTE connectivity, making it easy to live stream from virtually anywhere. Forget about bulky setups—this lightweight solution empowers you to broadcast live from events, remote locations, or even while traveling.

Professional Audio and Smooth Streaming Features

The YoloBox Ultra doesn’t just stop at video; it also ensures high-quality sound. With built-in audio mixing capabilities and adjustable input controls, integrating professional-grade audio into your streams is effortless.

To top it off, the device includes innovative software tools like adaptive bitrate technology, which automatically adjusts to fluctuating network conditions. This ensures a smooth and uninterrupted streaming experience, even in less-than-ideal environments.

Why Choose the YoloBox Ultra for Live Streaming?

  • All-in-one design: Simplifies your live production setup.
  • 4K streaming: Guarantees high-quality visuals.
  • Custom overlays: Add a professional touch with titles, logos, and effects.
  • Portable and versatile: Stream from anywhere with Wi-Fi or 4G LTE.
  • Seamless audio integration: Built-in controls for professional sound.
  • Network adaptability: Smooth streaming even on unstable connections.

Whether you’re a content creator, a live event producer, or someone looking to enhance your live broadcasts, the YoloBox Ultra offers an unparalleled combination of power, convenience, and versatility.


How Single Tokens Can Make or Break AI Reasoning

Imagine asking an AI to solve a simple math problem about paying back a loan. When the AI encounters the word “owed,” it stumbles, producing incorrect calculations and faulty logic. But change that single word to “paid,” and suddenly the AI’s reasoning transforms – becoming clear,…

Enabling AI to explain its predictions in plain language

Machine-learning models can make mistakes and be difficult to use, so scientists have developed explanation methods to help users understand when and how they should trust a model’s predictions.

These explanations are often complex, however, perhaps containing information about hundreds of model features. And they are sometimes presented as multifaceted visualizations that can be difficult for users who lack machine-learning expertise to fully comprehend.

To help people make sense of AI explanations, MIT researchers used large language models (LLMs) to transform plot-based explanations into plain language.

They developed a two-part system that converts a machine-learning explanation into a paragraph of human-readable text and then automatically evaluates the quality of the narrative, so an end-user knows whether to trust it.

By prompting the system with a few example explanations, the researchers can customize its narrative descriptions to meet the preferences of users or the requirements of specific applications.

In the long run, the researchers hope to build upon this technique by enabling users to ask a model follow-up questions about how it came up with predictions in real-world settings.

“Our goal with this research was to take the first step toward allowing users to have full-blown conversations with machine-learning models about the reasons they made certain predictions, so they can make better decisions about whether to listen to the model,” says Alexandra Zytek, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

She is joined on the paper by Sara Pido, an MIT postdoc; Sarah Alnegheimish, an EECS graduate student; Laure Berti-Équille, a research director at the French National Research Institute for Sustainable Development; and senior author Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems. The research will be presented at the IEEE Big Data Conference.

Elucidating explanations

The researchers focused on a popular type of machine-learning explanation called SHAP. In a SHAP explanation, a value is assigned to every feature the model uses to make a prediction. For instance, if a model predicts house prices, one feature might be the location of the house. Location would be assigned a positive or negative value that represents how much that feature modified the model’s overall prediction.

Often, SHAP explanations are presented as bar plots that show which features are most or least important. But for a model with more than 100 features, that bar plot quickly becomes unwieldy.

“As researchers, we have to make a lot of choices about what we are going to present visually. If we choose to show only the top 10, people might wonder what happened to another feature that isn’t in the plot. Using natural language unburdens us from having to make those choices,” Veeramachaneni says.

However, rather than utilizing a large language model to generate an explanation in natural language, the researchers use the LLM to transform an existing SHAP explanation into a readable narrative.

Having the LLM handle only the natural language part of the process limits the opportunity to introduce inaccuracies into the explanation, Zytek explains.

Their system, called EXPLINGO, is divided into two pieces that work together.

The first component, called NARRATOR, uses an LLM to create narrative descriptions of SHAP explanations that meet user preferences. By initially feeding NARRATOR three to five written examples of narrative explanations, the LLM will mimic that style when generating text.

“Rather than having the user try to define what type of explanation they are looking for, it is easier to just have them write what they want to see,” says Zytek.

This allows NARRATOR to be easily customized for new use cases by showing it a different set of manually written examples.

After NARRATOR creates a plain-language explanation, the second component, GRADER, uses an LLM to rate the narrative on four metrics: conciseness, accuracy, completeness, and fluency. GRADER automatically prompts the LLM with the text from NARRATOR and the SHAP explanation it describes.

“We find that, even when an LLM makes a mistake doing a task, it often won’t make a mistake when checking or validating that task,” she says.

Users can also customize GRADER to give different weights to each metric.

“You could imagine, in a high-stakes case, weighting accuracy and completeness much higher than fluency, for example,” she adds.

Analyzing narratives

For Zytek and her colleagues, one of the biggest challenges was adjusting the LLM so it generated natural-sounding narratives. The more guidelines they added to control style, the more likely the LLM was to introduce errors into the explanation.

“A lot of prompt tuning went into finding and fixing each mistake one at a time,” she says.

To test their system, the researchers took nine machine-learning datasets with explanations and had different users write narratives for each dataset. This allowed them to evaluate the ability of NARRATOR to mimic unique styles. They used GRADER to score each narrative explanation on all four metrics.

In the end, the researchers found that their system could generate high-quality narrative explanations and effectively mimic different writing styles.

Their results show that providing a few manually written example explanations greatly improves the narrative style. However, those examples must be written carefully — including comparative words, like “larger,” can cause GRADER to mark accurate explanations as incorrect.

Building on these results, the researchers want to explore techniques that could help their system better handle comparative words. They also want to expand EXPLINGO by adding rationalization to the explanations.

In the long run, they hope to use this work as a stepping stone toward an interactive system where the user can ask a model follow-up questions about an explanation.

“That would help with decision-making in a lot of ways. If people disagree with a model’s prediction, we want them to be able to quickly figure out if their intuition is correct, or if the model’s intuition is correct, and where that difference is coming from,” Zytek says.

AI’s Growing Appetite for Power: Are Data Centers Ready to Keep Up?

As artificial intelligence (AI) races forward, its energy demands are straining data centers to the breaking point. Next-gen AI technologies like generative AI (genAI) aren’t just transforming industries—their energy consumption is affecting nearly every data server component—from CPUs and memory to accelerators and networking. GenAI applications,…