Launched in 2021, GitHub Copilot has become a widely used tool for developers. It is an AI code generator that suggests code snippets and autocompletes lines. Since its launch, Copilot has been credited with improving developers’ productivity and code quality. GitHub Copilot has been involved in a legal case…
Unlocking Canon’s Latest PTZ Camera Firmware with All New Features – Videoguys
On This Week’s Videoguys Live, James is introducing the new PTZ firmware updates from Canon. This update brings free auto-tracking available to all Canon PTZ Cameras as well as other great new features.
Canon PTZ Camera Family
New Firmware Updates
- FREE Auto-tracking Now Included with all Cameras
- Improved Color Enhancements & Low Light Capabilities
- Improved Camera Control Experience
- Plus new features & capabilities added to the Canon RC-IP1000 controller
FREE Auto-Tracking Included Now on all PTZ Camera Models
- The free version has limited options available, but still retains excellent tracking performance
- Users who need detailed tracking configurations and tuning should purchase the full Auto Tracking license
Free Auto Tracking Feature Summary
- Subject Auto Detect
- 2 Subject Display Sizes (2&4 only)
- Subject Loss Action
- Prohibited Tracking Area
Fixed Values
- Subject will always be in the middle
- Auto Zoom is always on
- Sensitivity is Level 9 (out of 10)
- Tracking start time is 0 seconds (right away)
- Tracking Selection Area is full screen
- Tracking will always resume after manual operation
Auto Tracking Subject Improvement
- When multiple people are in the camera’s view, tracking can be lost if the subject’s face (or head, on the CR-N700) is temporarily blocked by another person. Auto Tracking has been improved to better handle temporary loss of the subject’s face or head.
Smoother Auto Tracking Start and Stop
- Improvements have been made to allow a smoother start and stop.
Support for Prohibited Tracking Areas
- Added the ability to specify an area in which tracking stops. Once the tracked target enters this area, tracking stops. This can be useful in situations such as presentations where multiple speakers stand to present and then sit afterwards.
Improved Subject Composition Adjustment
- Previously, small changes to the subject’s silhouette would not trigger any camera movement.
- The new firmware algorithm ensures camera movement even with very small silhouette adjustments.
More Exciting New Features
- USB Webcam – UVC Output Improvements
- Pan / Tilt Operation Range Limiter
- Enhanced Focus / Zoom Speed Resolution
- S/N Priority – Improved Strong Noise Reduction
USB Webcam – UVC Output Improvements
- The UVC (USB Video Class) output performance of the CR-N100 and CR-N300 will be improved, and additional output in YUV format will be supported
Pan / Tilt Operation Range Limiter
- Specify a Pan / Tilt movement limit to protect against moving the camera too far off subject
Enhanced Focus / Zoom Speed Resolution
- The current 16 zoom speed steps (24 steps for the CR-N700) will be expanded to 128 steps, and the current 3 focus speed steps will be expanded to 64 steps.
S/N Priority – Improved Strong Noise Reduction
- On 1/2.3” sensor cameras (CR-N100, CR-N300, CR-X300), noise was noticeable when shooting in low-light environments because the automatic noise reduction was not strong enough; this is normally addressed by manually assigning a stronger noise reduction setting. The new S/N Priority Mode makes it easier to apply strong noise reduction.
SMPTE Color Bars
- Available on CR-N700 and CR-X300
- Now Available on CR-N100, 300 and 500!
Increased available Clear Scan Resolutions
- On the CR-N700, the number of available clear scan resolutions will be increased from 256 to 768, to match the frequencies available in Cinema/Pro Video products.
RC-IP1000 Updates
- Support for outputting 2×2 and 3×3 multi-display IP preview screens
- Touch Preset Thumbnail Images
- Waveform Monitor and Vectorscope Display
- Auto Tracking Controls
- Copy Camera Settings
Bungie Lays Off Over 200 Employees, Announces Plans For Deeper Integration With Sony
Bungie has announced layoffs affecting 220 employees. In a blog post, Bungie CEO Pete Parsons cites “rising costs of development and industry shifts as well as enduring economic conditions” as the primary factors, while revealing some dramatic changes for the company going forward.
These layoffs represent 17 percent of the studio’s workforce and affect every department of the company, with executive and senior leadership roles impacted most. Parsons states that departing employees will receive a “generous” exit package that includes severance, bonus, and health coverage. Bungie also plans to hold employee town halls, along with team and private individual meetings over the coming weeks, to help sort out the next steps. 850 employees remain following the layoffs.
“I realize all of this is hard news, especially following the success we have seen with The Final Shape,” Parsons writes. “But as we’ve navigated the broader economic realities over the last year, and after exhausting all other mitigation options, this has become a necessary decision to refocus our studio and our business with more realistic goals and viable financials.”
Parsons also reveals plans to further integrate Bungie into Sony Interactive Entertainment (SIE), which acquired the studio in 2022, to leverage its strengths. First, Bungie is working to integrate 155 roles (12 percent of its staff) into SIE over the next few quarters. Bungie states this has allowed it to save additional talent that would otherwise have been affected by today’s layoffs.
Second, Bungie is working with PlayStation Studios to form a new, separate in-house studio that will continue developing one of its incubation projects. Bungie describes this title as, “an action game set in a brand-new science-fantasy universe.”
Parsons then elaborates on how Bungie found itself in this difficult position. He explains that the team’s goal was to ship games in “three enduring, global franchises” and that it set up several incubation projects to achieve this aim. However, Bungie stretched itself too thin too quickly. This forced its support structures to grow larger than the company could feasibly sustain, especially given the ongoing development of two big titles: Destiny 2 and the upcoming Marathon.
Destiny 2: The Final Shape
Parsons also says this rapid expansion collided with a broader economic slowdown, a sharp downturn in the games industry, the mixed reception to Destiny 2: Lightfall, and the need to give the recently released The Final Shape expansion for Destiny 2 (which garnered critical acclaim) and Marathon more development time to ensure high quality. “We were overly ambitious, our financial safety margins were subsequently exceeded, and we began running in the red,” Parsons states.
“After this new trajectory became clear, we knew we had to change our course and speed, and we did everything we could to avoid today’s outcome,” Parsons writes. “Even with exhaustive efforts undertaken across our leadership and product teams to resolve our financial challenges, these steps were simply not enough.”
Today’s layoffs come roughly eight months after the studio cut 100 staffers last October, and are the second round since the studio was acquired by Sony. They represent another wave of the game industry job cuts that have run rampant since last year; hopefully the affected staff can land on their feet sooner rather than later.
Balancing innovation and trust: Experts assess the EU’s AI Act
As the EU’s AI Act prepares to come into force tomorrow, industry experts are weighing in on its potential impact, highlighting its role in building trust and encouraging responsible AI adoption. Curtis Wilson, Staff Data Engineer at Synopsys’ Software Integrity Group, believes the new regulation could…
Apple opts for Google chips in AI infrastructure, sidestepping Nvidia
In a report published on Monday, it was disclosed that Apple sidestepped industry leader Nvidia in favour of chips designed by Google. Instead of employing Nvidia’s GPUs for its artificial intelligence software infrastructure, Apple will use Google chips as the cornerstone of AI-related features and tools…
UAE blocks US congressional meetings with G42 amid AI transfer concerns
There have been reports that the United Arab Emirates (UAE) has “suddenly cancelled” the ongoing series of meetings between a group of US congressional staffers and Emirati AI firm G42, after some US lawmakers raised concerns that this practice may lead to the transfer of advanced…
Helping Olympic athletes optimize their performance, one stride at a time
The Olympics is all about pushing the frontiers of human performance. As some athletes prepared for the Paris 2024 games, that included using a new technology developed at MIT.nano.
The technology was created by Striv (pronounced “strive”), a startup whose founder gained access to the cutting-edge labs and fabrication equipment at MIT.nano as part of the START.nano accelerator program. Striv’s tactile sensing technology fits into the inserts of shoes and, when combined with algorithms that crunch that tactile data, can precisely track force, movement, and form. Runners including USA marathoner Clayton Young, Jamaican track and field Olympian Damar Forbes, and former Olympic marathoner Jake Riley have tried Striv’s device.
“I’m excited about the potential of Striv’s technology,” Riley says. “It’s on a good path to revolutionize how we train and prevent injuries. After testing the sensors and seeing the data firsthand, I’m convinced of its value.”
For Striv founder Axl Chen, the 2024 games are the perfect opportunity to show that the product can help athletes at the highest level. But Chen also believes their product can help many non-Olympians.
“We think the Paris 2024 Olympics will be a really interesting opportunity for us to test the product with the athletes training for it,” Chen says. “After that, we’ll offer this to the general public to help everyone get the same kind of support and coaching advice as professional athletes.”
Putting yourself in someone else’s shoes
Chen was working in a robotics lab at Tsinghua University in China when he began using tactile sensors. Over the next two years, he experimented with ways to make the sensors more flexible and cost-effective.
“I think a lot of people have already explored vision and language, but tactile sensing as a way of perceiving the world seemed more open to me,” Chen says. “I thought tactile sensors and AI could make for powerful new products.”
The first space Striv entered was virtual reality (VR) gaming. The company created a shoe with embedded sensors that could capture users’ body motions in real-time by combining the sensor data with regular VR hand controllers. Striv even sold about 300 pairs of its shoes to interested customers around the world.
Striv has also drawn interest from companies in the medical, robotics, and automotive fields, which was both a blessing and a curse, given that startups need to focus on one specific customer segment early on.
Chen says getting into the START.nano program in 2023 was an inflection point for the company.
“I pretty much didn’t apply to anything else,” Chen says. “I’m really interested in this technology, and I knew if I could do research at MIT, it would be really helpful to push this technology forward.”
Since then, Chen has leveraged MIT’s advanced nanofabrication equipment, laboratories, and expertise to iterate on different designs and build prototypes. That has included working in MIT.nano’s Immersion Lab, which features precise motion capture devices and other sensing technologies, like VO2 intake measurements and detailed force analysis of runners’ steps on a treadmill.
Striv’s team has also received support from the MIT Venture Mentoring Service (VMS) and is part of the MIT Industrial Liaison Program’s Startup Exchange program, which has helped the team home in on athletes as the beachhead market for their technology.
“It’s remarkable that MIT is supporting us so much,” Chen says. “We often get asked why they’re doing this [for non-students], and we say MIT is committed to pushing technology forward.”
Striv’s sensing solution is made up of two layers of flexible electrodes with a material in between that can create different electrical characteristics corresponding to the force it comes under. That material has been at the heart of Chen’s research at MIT.nano: He’s trying to make it more durable and precise by adding nanostructures and making other tweaks.
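A piezoresistive insole sensor of this kind is typically read out as a resistance that drops as pressure rises, which software then maps to force through a calibration curve. The sketch below is a generic illustration of that readout chain, not Striv’s actual design — the voltage-divider values and calibration constants are invented for the example:

```python
def sensor_resistance_from_adc(adc_value, adc_max=1023, v_supply=3.3, r_fixed=10_000):
    """Infer the sensor's resistance from a voltage-divider ADC reading.
    The sensor sits above a fixed resistor: more force -> lower sensor
    resistance -> higher voltage at the ADC pin."""
    v_out = v_supply * adc_value / adc_max
    if v_out <= 0:
        return float("inf")  # no load: effectively an open circuit
    return r_fixed * (v_supply - v_out) / v_out

def force_from_resistance(r_sensor, k=2.0e5, exponent=-1.0):
    """Map resistance to force with a power-law calibration curve.
    The constants here are made up; real sensors are calibrated per unit."""
    if r_sensor == float("inf"):
        return 0.0
    return k * (r_sensor ** exponent)

# Mid-scale ADC reading -> sensor resistance -> estimated force
r = sensor_resistance_from_adc(512)
print(round(r), round(force_from_resistance(r), 2))
```

In a real product the per-sensor calibration curve would be fit against a reference load cell, and an array of such sensors across the insole would feed the motion-inference models described below.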
Striv is also developing AI algorithms that use the sensor data to infer full body motion.
“We can quantify the force they apply to the ground and the efficiency of their movements,” Chen explains. “We can see if they’re leaning too far forward, or their knees are too high. That can be really useful in determining if they’re improving or not.”
Technology for the masses
As soon as Chen began interviewing runners, he knew Striv could help them.
“The alternatives for athletes are either to go to a really expensive biomechanics lab or use a wearable that’s able to track your heart rate but doesn’t give insights into your performance,” Chen explains. “For example, if you’re running, how is your form? How can you improve it? Runners are really interested in their form. They care about how high their knees go, how high they’re jumping, how much force they’re putting into the ground.”
Striv has tested its product with around 50 professional athletes to date and worked with Young in the leadup to the Olympics. Chen also has an eye on helping more casual runners.
“We also want to bring this to serious runners that aren’t professional,” Chen says. “I know a lot of people in Boston who run every day. That’s where this will go next.”
As the company grows and collects more data, Chen believes Striv will be able to provide personalized plans for improving performance and avoiding injuries across a range of different activities.
“We talk to a lot of coaches, and we think there’s potential to bring this to a lot of different sports,” Chen says. “Golfers, hikers, tennis players, cyclists, skiers and snowboarders. We think this could be really useful for all of them.”
Method prevents an AI model from being overconfident about wrong answers
People use large language models for a huge array of tasks, from translating an article to identifying financial fraud. However, despite the incredible capabilities and versatility of these models, they sometimes generate inaccurate responses.
On top of that problem, the models can be overconfident about wrong answers or underconfident about correct ones, making it tough for a user to know when a model can be trusted.
Researchers typically calibrate a machine-learning model to ensure its level of confidence lines up with its accuracy. A well-calibrated model should have lower confidence in an incorrect prediction and higher confidence in a correct one. But because large language models (LLMs) can be applied to a seemingly endless collection of diverse tasks, traditional calibration methods are ineffective.
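Miscalibration of this kind is commonly quantified with a metric such as expected calibration error (ECE), which bins predictions by confidence and compares average confidence to accuracy within each bin. The sketch below is a generic illustration of the metric, not the paper’s evaluation code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by their confidence and sum the |confidence - accuracy|
    gaps, weighted by the fraction of predictions landing in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            accuracy = sum(correct[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# A perfectly calibrated toy case: 80% confidence, 8 of 10 correct.
print(round(expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2), 3))
```

A model that reports 90% confidence while being right only half the time would score an ECE near 0.4, which is the kind of gap calibration methods aim to close.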
Now, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a calibration method tailored to large language models. Their method, called Thermometer, involves building a smaller, auxiliary model that runs on top of a large language model to calibrate it.
Thermometer is more efficient than other approaches — requiring less power-hungry computation — while preserving the accuracy of the model and enabling it to produce better-calibrated responses on tasks it has not seen before.
By enabling efficient calibration of an LLM for a variety of tasks, Thermometer could help users pinpoint situations where a model is overconfident about false predictions, ultimately preventing them from deploying that model in a situation where it may fail.
“With Thermometer, we want to provide the user with a clear signal to tell them whether a model’s response is accurate or inaccurate, in a way that reflects the model’s uncertainty, so they know if that model is reliable,” says Maohao Shen, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on Thermometer.
Shen is joined on the paper by Gregory Wornell, the Sumitomo Professor of Engineering who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory for Electronics, and is a member of the MIT-IBM Watson AI Lab; senior author Soumya Ghosh, a research staff member in the MIT-IBM Watson AI Lab; as well as others at MIT and the MIT-IBM Watson AI Lab. The research was recently presented at the International Conference on Machine Learning.
Universal calibration
Since traditional machine-learning models are typically designed to perform a single task, calibrating them usually involves one task-specific method. On the other hand, since LLMs have the flexibility to perform many tasks, using a traditional method to calibrate that model for one task might hurt its performance on another task.
Calibrating an LLM often involves sampling from the model multiple times to obtain different predictions and then aggregating these predictions to obtain better-calibrated confidence. However, because these models have billions of parameters, the computational costs of such approaches rapidly add up.
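The sample-and-aggregate approach described above can be sketched in a few lines. Here `ask_model` is a stand-in for a stochastic LLM call (the names and the toy answer distribution are invented for illustration); agreement frequency among the samples serves as the confidence estimate:

```python
import random
from collections import Counter

def ask_model(question, rng):
    """Stand-in for a stochastic LLM call; a real model sampled with
    temperature > 0 can return different answers on repeated calls."""
    return rng.choice(["Paris", "Paris", "Paris", "Lyon"])

def sampled_confidence(question, n_samples=20, seed=0):
    """Query the model several times and use the majority answer's
    frequency as a confidence estimate. With a billion-parameter LLM,
    every one of these n_samples calls is a full forward pass."""
    rng = random.Random(seed)
    counts = Counter(ask_model(question, rng) for _ in range(n_samples))
    answer, count = counts.most_common(1)[0]
    return answer, count / n_samples

answer, confidence = sampled_confidence("Capital of France?")
print(answer, confidence)
```

The cost problem is visible in the structure: confidence for a single question requires `n_samples` full inference passes, which is exactly the overhead Thermometer is designed to avoid.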
“In a sense, large language models are universal because they can handle various tasks. So, we need a universal calibration method that can also handle many different tasks,” says Shen.
With Thermometer, the researchers developed a versatile technique that leverages a classical calibration method called temperature scaling to efficiently calibrate an LLM for a new task.
In this context, a “temperature” is a scaling parameter used to adjust a model’s confidence to be aligned with its prediction accuracy. Traditionally, one determines the right temperature using a labeled validation dataset of task-specific examples.
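Temperature scaling itself is a one-parameter transform: the model’s logits are divided by the temperature before the softmax, which softens (T > 1) or sharpens (T < 1) the confidence without changing which answer ranks first. A minimal sketch, with illustrative logit values:

```python
import math

def softmax(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize to probabilities.
    T > 1 softens the distribution, T < 1 sharpens it; the argmax
    (the model's chosen answer) never changes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]
print(max(softmax(logits)))                   # raw top-class probability
print(max(softmax(logits, temperature=2.0)))  # softened by temperature scaling
```

Fitting T traditionally means choosing the value that best matches confidence to accuracy on a labeled validation set; Thermometer’s contribution, described next, is predicting that value without one.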
Since LLMs are often applied to new tasks, labeled datasets can be nearly impossible to acquire. For instance, a user who wants to deploy an LLM to answer customer questions about a new product likely does not have a dataset containing such questions and answers.
Instead of using a labeled dataset, the researchers train an auxiliary model that runs on top of an LLM to automatically predict the temperature needed to calibrate it for this new task.
They use labeled datasets of a few representative tasks to train the Thermometer model, but then once it has been trained, it can generalize to new tasks in a similar category without the need for additional labeled data.
A Thermometer model trained on a collection of multiple-choice question datasets, perhaps including one with algebra questions and one with medical questions, could be used to calibrate an LLM that will answer questions about geometry or biology, for instance.
“The aspirational goal is for it to work on any task, but we are not quite there yet,” Ghosh says.
The Thermometer model only needs to access a small part of the LLM’s inner workings to predict the right temperature that will calibrate its prediction for data points of a specific task.
An efficient approach
Importantly, the technique does not require multiple training runs and only slightly slows the LLM. Plus, since temperature scaling does not alter a model’s predictions, Thermometer preserves its accuracy.
When they compared Thermometer to several baselines on multiple tasks, it consistently produced better-calibrated uncertainty measures while requiring much less computation.
“As long as we train a Thermometer model on a sufficiently large number of tasks, it should be able to generalize well across any new task, just like a large language model, it is also a universal model,” Shen adds.
The researchers also found that if they train a Thermometer model for a smaller LLM, it can be directly applied to calibrate a larger LLM within the same family.
In the future, they want to adapt Thermometer for more complex text-generation tasks and apply the technique to even larger LLMs. The researchers also hope to quantify the diversity and number of labeled datasets one would need to train a Thermometer model so it can generalize to a new task.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
Canva Expands AI Capabilities with Acquisition of Leonardo.ai
Canva has announced its acquisition of Leonardo.ai, an Australian generative AI startup. This strategic purchase positions Canva to compete more aggressively in the rapidly evolving market for AI-enhanced design platforms. The acquisition of Leonardo.ai represents a major step forward for Canva in its quest to build…
Testing AI Tools? Don’t Forget to Think About the Total Cost.
In 2023, AI quickly moved from a novel and futuristic idea to a core component of enterprise strategies everywhere. While ChatGPT is one of the most popular shadow IT software applications, IT leaders are already working to formally adopt AI tools. While the average use of…