
A data-driven approach to making better choices

Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course is investigating these interesting possibilities.

Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan.

Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data.

Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003.

The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics.

“The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says.

There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations.

Students learn how to use machine learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights so they compose well within machine learning tools, and to understand areas and topics where the integration of behavioral economics and algorithmic tools might be most fruitful.

Students also produce ideas, develop associated research, and see the bigger picture. They’re led to understand where an insight fits and where the broader research agenda is leading. Participants learn to think critically about what supervised LLMs can (and cannot) do, to integrate those capacities with the models and insights of behavioral economics, and to recognize the most fruitful areas for applying what their investigations uncover.

The dangers of subjectivity and bias

According to Rambachan, behavioral economics acknowledges that biases and mistakes exist throughout our choices, even absent algorithms. “The data used by our algorithms exist outside computer science and machine learning, and instead are often produced by people,” he continues. “Understanding behavioral economics is therefore essential to understanding the effects of algorithms and how to better build them.”

Rambachan sought to make the course accessible regardless of attendees’ academic backgrounds. The class included advanced degree students from a variety of disciplines.

By offering students a cross-disciplinary, data-driven approach to investigating and discovering ways in which algorithms might improve problem-solving and decision-making, Rambachan hopes to build a foundation on which to redesign existing systems of jurisprudence, health care, consumer lending, and industry, to name a few areas.

“Understanding how data are generated can help us understand bias,” Rambachan says. “We can ask questions about producing a better outcome than what currently exists.”
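Rambachan’s question can be made concrete with a toy exercise. The sketch below is purely illustrative – the lending setup, features, and data are synthetic stand-ins, not material from the class – but it shows the shape of the comparison: given recorded human decisions and observed outcomes, ask whether a simple supervised model would have chosen better.

```python
# Illustrative sketch only: synthetic data, hypothetical lending setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two applicant features, the eventual repayment outcome, and a noisy
# human approve/deny rule standing in for recorded decisions.
X = rng.normal(size=(1000, 2))
repaid = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0
human_approved = (X[:, 0] + rng.normal(scale=1.5, size=1000)) > 0

# Train on the first half of the data; audit decisions on the second half.
model = LogisticRegression().fit(X[:500], repaid[:500])
model_approved = model.predict(X[500:]).astype(bool)

def approval_precision(approved, outcome):
    # Share of approved applicants who actually repaid.
    return outcome[approved].mean()

print("human:", approval_precision(human_approved[500:], repaid[500:]))
print("model:", approval_precision(model_approved, repaid[500:]))
```

Real decision data is harder: outcomes are typically observed only for the applicants humans approved (the “selective labels” problem that Mullainathan and co-authors have studied), a subtlety the synthetic data above deliberately sidesteps.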

Useful tools for re-imagining social operations

Economics doctoral student Jimmy Lin was skeptical about the claims Rambachan and Mullainathan made when the class began, but changed his mind as the course continued.

“Ashesh and Sendhil started with two provocative claims: The future of behavioral science research will not exist without AI, and the future of AI research will not exist without behavioral science,” Lin says. “Over the course of the semester, they deepened my understanding of both fields and walked us through numerous examples of how economics informed AI research and vice versa.”

Lin, who’d previously done research in computational biology, praised the instructors’ emphasis on the importance of a “producer mindset,” thinking about the next decade of research rather than the previous decade. “That’s especially important in an area as interdisciplinary and fast-moving as the intersection of AI and economics — there isn’t an old established literature, so you’re forced to ask new questions, invent new methods, and create new bridges,” he says.

The speed of change to which Lin alludes is a draw for him, too. “We’re seeing black-box AI methods facilitate breakthroughs in math, biology, physics, and other scientific disciplines,” Lin says. “AI can change the way we approach intellectual discovery as researchers.”

An interdisciplinary future for economics and social systems

Studying traditional economic tools and enhancing their value with AI may yield game-changing shifts in how institutions and organizations teach and empower leaders to make choices.

“We’re learning to track shifts, to adjust frameworks and better understand how to deploy tools in service of a common language,” Rambachan says. “We must continually interrogate the intersection of human judgment, algorithms, AI, machine learning, and LLMs.”

Lin enthusiastically recommended the course regardless of students’ backgrounds. “Anyone broadly interested in algorithms in society, applications of AI across academic disciplines, or AI as a paradigm for scientific discovery should take this class,” he says. “Every lecture felt like a goldmine of perspectives on research, novel application areas, and inspiration on how to produce new, exciting ideas.”

The course, Rambachan says, argues that better-built algorithms can improve decision-making across disciplines. “By building connections between economics, computer science, and machine learning, perhaps we can automate the best of human choices to improve outcomes while minimizing or eliminating the worst,” he says.

Lin remains excited about the course’s as-yet unexplored possibilities. “It’s a class that makes you excited about the future of research and your own role in it,” he says.

Paying it forward

MIT professors Erik Lin-Greenberg and Tracy Slatyer truly understand the positive impact that advisors have in the life of a graduate student. Two of the most recent faculty members to be named “Committed to Caring,” they attribute their excellence in advising to the challenging experiences and life-changing mentorship they received during their own graduate school journeys.

Tracy Slatyer: Seeing the PhD as a journey

Tracy Slatyer is a professor in the Department of Physics who works on particle physics, cosmology, and astrophysics. Focused on unraveling the mysteries of dark matter, Slatyer investigates potential new physics through the analysis of astrophysical and cosmological data, exploring scenarios involving novel forces and theoretical predictions for photon signals.

One of Slatyer’s key approaches is to prioritize students’ educational journeys over academic accomplishments alone, also acknowledging the prevalence of imposter syndrome.

Having struggled in graduate coursework themselves, Slatyer shares their personal past challenges and encourages students to see the big picture: “I try to remind [students] that the PhD is a marathon, not a sprint, and that once you have your PhD, nobody will care if it took you one year or three to get through all the qualifying exams and required classes.” Many students also expressed gratitude for how Slatyer offered opportunities to connect outside of work, including invitations to tea-time.

One of Slatyer’s key beliefs is the need for community amongst students, postdocs, and professors. Slatyer encourages students to meet with professors outside of their primary field of interest and helps advisees explore far-ranging topics. They note the importance of connecting with individuals at different career stages, often inviting students to conferences at other institutions, and hosting visiting scientists.

Advisees noted Slatyer’s realistic portrayal of expectations within the field and open discussion of work-life balance. They maintain a document with clear advising guidelines, such as placing new students on projects with experienced researchers. Slatyer also schedules weekly meetings to discuss non-research topics, including career goals and upcoming talks.

In addition, Slatyer does not shy away from the fact that their field is competitive and demanding. They are honest about their experiences in academia, noting that networking may be just as important as academic performance for a successful career.

Erik Lin-Greenberg: Empathy and enduring support

Erik Lin-Greenberg is an assistant professor in the history and culture of science and technology in the Department of Political Science. His research examines how emerging military technology affects conflict dynamics and the use of force.

Lin-Greenberg’s thoughtful supervision of his students underscores his commitment to cultivating the next generation of researchers. Students are grateful for his knack for identifying weak arguments, as well as his guidance through challenging publication processes: “For my dissertation, Erik has mastered the difficult art of giving feedback in a way that does not discourage.”

Lin-Greenberg’s personalized approach is further evidence of his exceptional teaching. In the classroom, students praise his thorough preparation, ability to facilitate rich discussions, and flexibility during high-pressure periods. In addition, his unique ability to break down complex material makes topics accessible to the diverse array of backgrounds in the classroom.

His mentorship extends far beyond academics, encompassing a genuine concern for the well-being of his students through providing personal check-ins and unwavering support.

Much of this empathy comes from Erik’s own tumultuous beginnings in graduate school at Columbia University, where he struggled to keep up with coursework and seriously considered leaving the program. He points to the care and dedication of mentors, and advisor Tonya Putnam in particular, as having an enormous impact.

“She consistently reassured me that I was doing interesting work, gave amazing feedback on my research, and was always open and transparent,” he recounts. “When I’m advising today, I constantly try to live up to Tonya’s example.”

In his own group, Erik chooses creative approaches to mentorship, including taking mentees out for refreshments to navigate difficult dissertation discussions. In his students’ moments of despair, he boosts their mood with photos of his cat, Major General Lansdale.

Ultimately, one nominator credited his ability to continue his PhD to Lin-Greenberg’s uplifting spirit and endless encouragement: “I cannot imagine anyone more deserving of recognition than Erik Lin-Greenberg.”

Data breach litigation, the new cyber battleground. Are you prepared?

By Deryck Mitchelson, EMEA Field Chief Information Security Officer, Check Point Software Technologies.

Nearly everyone trusts Google to keep information secure; you probably trust it with your email, and I use it for my personal email. Yet, for three years – from 2015 to 2018 – a single vulnerability in the Google Plus platform resulted in the third-party exposure of millions of pieces of consumer data.

Google paid a $350 million settlement in a corresponding shareholder lawsuit, but most organizations cannot afford millions in settlements; for most, that level of breach-related expenditure is unthinkable. And even for larger organizations with the financial means, constant cycles of breach-related lawsuits are unsustainable.

Yet over the next few years, especially as organizations continue to move data into the cloud, they are likely to see a significant uptick in post-breach litigation, including litigation against CISOs, unless they adopt stronger cyber security protocols.

Litigation looms large

Organizations that have experienced data breaches are battling a disturbing number of lawsuits. In particular, privacy-related class actions against healthcare providers are taking off.

Globally, there were twice as many data breach victims in 2023 as in 2022.

In 2023 alone, breach-related class actions and government enforcement suits resulted in over $50 billion in settlement expenditures.

The Irish Health Service Executive (HSE) was severely impacted by a large cyber attack in 2021, which left 80% of its IT services encrypted and saw 700 GB of unencrypted data exfiltrated, including protected health information. The HSE subsequently wrote to 90,936 affected individuals. It has been reported that the HSE is facing 473 data-protection lawsuits, and this number is expected to keep rising.

I recently spoke with a lawyer who specializes in data breach litigation. Anecdotally, she mentioned that breach-related lawsuits have grown roughly tenfold in the last year. This is becoming the new normal after a breach.

While organizations do win some of these lawsuits, courts have become increasingly sympathetic to plaintiffs, as data breaches can result in human suffering and hardship in the form of psychological distress, identity theft, financial fraud and extortion. They can also result in loss of human life, but more about that later.

In court, an organization can no longer plead ‘we made an error’ or ‘we were unaware’ and assume that such a line will suffice. The World Economic Forum has found that 95% of cyber security threats can, in some capacity, be traced to human error. These cases are not complex, but the level of litigation shows that businesses are still making avoidable missteps.

To that end, businesses not only need to start thinking about data protection differently, but also need to start operating differently.

Personal (and criminal) liability for CISOs

CISOs can be held personally liable should they be found to have failed to adequately safeguard the systems and data in their charge. At the moment, we’re not seeing much in the way of criminal liability for CISOs. However, if CISOs appear to have obfuscated the timeline of events, or if there isn’t full transparency with boards about levels of cyber risk, courts will indeed pursue a detailed investigation of a CISO’s actions.

Consider the questions a court might ask: the patch that would have fixed a known critical vulnerability should have been applied immediately, so if the organization hadn’t delayed, would it still have been breached?

Therefore, it is in CISOs’ best interest to record everything as a proactive breach-preparedness measure – every interaction, every meeting with the board, and every document they write (who said what, what the feedback was, who has read it, what the asks are).

If a CISO ends up in litigation, he or she needs to be able to say ‘this risk was fully understood by the board’. CISOs will not be able to argue “well, the board didn’t understand the level of risk” or “this was too complex to convey to the board”; it is the CISO’s job to ensure cyber risk is fully understood.
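As a purely hypothetical sketch of what such a record might look like in practice – the structure and field names below are illustrative assumptions, not a compliance standard – a CISO could keep one structured entry per board interaction:

```python
# Hypothetical structure for a board-briefing log; the field names are
# illustrative assumptions, not any regulatory or industry standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskBriefingRecord:
    meeting_date: date
    risk_described: str                 # what risk was presented
    presented_by: str                   # who said what
    board_feedback: str                 # what the feedback was
    readers: list[str] = field(default_factory=list)  # who has read it
    asks: list[str] = field(default_factory=list)     # what the asks were
    decision: str = "no decision recorded"

entry = RiskBriefingRecord(
    meeting_date=date(2024, 6, 3),
    risk_described="Unpatched critical vulnerability on internet-facing app",
    presented_by="CISO",
    board_feedback="Risk accepted pending 30-day remediation window",
    readers=["CEO", "CFO", "Audit committee chair"],
    asks=["Emergency patching budget", "External penetration test"],
)
```

Whatever the format, the point is the same: each entry fixes in time who was told what, what feedback came back, and what was asked for – exactly the evidence a CISO needs if a court later probes the timeline.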

We’re starting to see a trend where CISOs are leaving organizations on the back of large breaches, which may mean that they knew their charter, but failed to take full responsibility and accountability for the organization’s entire cyber security program.

The consumer perspective

As a consumer, I would expect CISOs to know what their job is: to understand the attack surface, to map out where they have weaknesses and vulnerabilities, and to have a program in place to mitigate them.

But even if CISOs have a program in place to mitigate breaches, consumers can still bring a class action against them. Consumers can still argue that cyber security staff should have and could have moved faster; that they should have sought additional investment from the board to remediate problems efficiently, or to increase operational capacity and capability, in order to prevent the data breach.

The challenge CISOs face is balancing funding acquisition, the pace of change, innovation, and competitive advantage against actually ensuring that all security endeavors are done correctly.

A current case study in liability

In Scotland, NHS Dumfries and Galloway recently experienced a serious data breach. The attack led to the exposure of a huge volume of personally identifiable information (PII). Reports indicate that three terabytes of sensitive data may have been stolen. As proof, the cyber criminals sent screenshots of stolen medical records to the healthcare service.

As expected, the ransom demand was not paid, and the criminals have now leaked a large volume of data online. Having previously worked in NHS Scotland, I find such criminal activity, targeting sensitive healthcare information, deplorable. Will we now, as with the HSE, see already constrained taxpayers’ money being used to defend lawsuits?

Liability leverage with proper tooling

CISOs cannot simply put tooling in place if it can’t stand up to scrutiny. If CISOs are looking at tooling, but less so at the effectiveness and efficacy of that tooling, then they should recognize that the probability of facing litigation is, arguably, fairly high. Just because tooling functions doesn’t mean that it’s fit for purpose.

Regarding tooling, CISOs should ask themselves: ‘Is this tool doing what it was advertised as capable of?’ and ‘Is this delivering the right level of preventative security for the organization?’

Boards should also demand a certain level of security. They should be asking of CISOs, ‘Is the efficacy of what you’ve implemented delivering at the expected level, or is it not?’ and ‘Would our security have prevented a similar attack?’ We don’t see enough senior conversation around that. A lot of organizations fail to think in terms of, ‘We’ve got a solution in-place, but is it actually performing?’
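One rough way to put a number behind those board questions is to replay known attack techniques – from purple-team exercises, for instance – and count how many the deployed controls actually stopped. The sketch below is a hypothetical illustration; the techniques, outcome categories, and threshold are assumptions, not an industry benchmark:

```python
# Hypothetical efficacy tally for deployed controls; the techniques,
# outcome categories, and 90% target below are illustrative assumptions.
from collections import Counter

# Outcome per simulated technique: "prevented", "detected", or "missed".
results = {
    "phishing payload delivery": "prevented",
    "credential stuffing": "detected",
    "lateral movement via SMB": "missed",
    "data exfiltration over DNS": "detected",
}

counts = Counter(results.values())
total = len(results)
prevented = counts["prevented"] / total
covered = (counts["prevented"] + counts["detected"]) / total

print(f"prevented outright: {prevented:.0%}; prevented or detected: {covered:.0%}")
if covered < 0.90:  # target set by the board's risk appetite, not a standard
    print("Efficacy below target: escalate with a remediation plan")
```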

CISOs need to approach data the same way that banks approach financial value. Banks place the absolute best safeguards around bank accounts, investments, stocks and money. CISOs need to do the same with all data.

Third-party risk

One of the areas in which I often see organizations struggle is supply chain and third-party risk. As you’ll recall, in August of 2023, over 2,600 organizations that deployed the MOVEit file transfer app contended with a data breach.

What lessons around due diligence can be learned here? What more could organizations have done? Certainly, CISOs shouldn’t just be giving information to third parties to process. CISOs need to be sure that data is being safeguarded to the right levels. If it’s not, organizational leaders should hold CISOs accountable.

If the third party hasn’t done full risk assessments, completed adequate due diligence and understood the information that they’ve got, then consider severing the business connection or stipulate that in order to do business, certain security requirements must be met.

The best litigation defense

In my view, the best means of avoiding litigation is improving preventative security by leveraging a unified platform that offers end-to-end visibility across your entire security estate. Select a platform with integrated AI capabilities, as these will help prevent and detect a breach that may be in progress.

If an organization can demonstrate that it has deployed a security platform that adheres to industry best practices, it can more effectively demonstrate compliance, even in the event of a data breach.

With cyber security systems that leverage AI-based mitigation, remediation and automation, the chances of a class action are massively reduced, as the organization will have taken significant and meaningful steps to mitigate the potential for a breach.

Reduce your organization’s breach probability and, moreover, limit the potential for lawsuits, criminal charges against your CISO, and overwhelming legal expenditures.

Kiborg: Arena Is An Action Roguelite That Looks Like Cyberpunk Sifu

Developer Sobaka Studio revealed Kiborg: Arena, an action roguelite that looks like a cyberpunk Sifu, during today’s Guerrilla Collective showcase. Arena is a prequel to Kiborg, and it’s coming to PlayStation 5, Xbox Series X/S, and PC (via Steam) sometime this summer.

In the full game – Kiborg – players must fend off waves of foes as Morgan Lee, the leader of a ragtag group of resistance fighters on the prison planet of Sigma. “Strike with punishing hand-to-hand combat skills, blast mechanized soldiers with firearms, and deploy cybernetic-enhanced abilities to devastate bloodthirsty baddies. Learn to dodge, block, and parry adversaries to set them up for the killing blow.” It sounds, and judging by the reveal trailer looks, a lot like Sifu, the great combat-centric brawler from 2022.


In Arena, players find “Morgan Lee, who sends the clones that fight on his behalf out on missions, fighting in Sigma’s Coliseum. Here, Morgan’s clones will square off in bloody battles against other denizens of the prison planet for the cash to help keep his resistance going.” Arena also features an Endless Mode, which Sobaka says spawns an endless stream of increasingly difficult enemies to overcome.

In the full game, players will gather resources to unlock permanent upgrades, form alliances, and research forbidden technologies to gain the upper hand. 


Arena launches this summer on PlayStation 5, Xbox Series X/S, and PC. The full Kiborg game does not yet have a release date. 


