For the past few years, one of the highest compliments you could pay Google was to say it wasn’t making the same mistakes as Facebook. The two companies are similar in many ways.

Bad: The company’s healthcare project was too quiet

In a July earnings call, Google said it was working with Ascension, a major healthcare system, to use cloud-based AI services to improve medical outcomes. But then it had little else to say about the effort until November, when the Wall Street Journal’s Rob Copeland reported on the vast scale of the project, code-named “Nightingale.” The report said that some Ascension employees were concerned about the privacy implications of giving Google employees access to patients’ health records. Google then published a FAQ spelling out how the two companies were working together, arguing that the work adhered to all regulations and disputing some aspects of media reports.

Even if Project Nightingale fulfills its promise of keeping people healthier, Google would have been well advised to anticipate people’s worst fears about the initiative and to dispel them as early as possible in the process, rather than trying to tamp them down after the Journal’s story appeared.

Good: YouTube’s content policies got more sensible

In June, YouTube finally banned neo-Nazi and white-supremacist videos, deleting thousands of them from the site. And in December, it broadened its rules against hate speech to encompass veiled attacks and insults based on factors such as race and sexual orientation. (Better late than never: Earlier in the year, after Vox writer Carlos Maza tweeted a supercut of conservative YouTuber Steven Crowder mocking him—over and over and over—for being gay and Latino, YouTube had maintained that Crowder’s attacks were acceptable.) In a December 60 Minutes appearance, YouTube CEO Susan Wojcicki also said that changes to its algorithm had decreased the amount of time Americans spend watching questionable videos, such as anti-vaccination material and miracle-cure hoaxes, by 70%.

Bad: Google’s content troubles remain many and varied

Early in the year, former YouTube creator Matt Watson charged that pedophile rings were operating on the service and infesting the comments on videos showing children—a topic that later became the subject of a New York Times investigation by Max Fisher and Amanda Taub. And in December, The Verge’s Casey Newton reported that content moderators working for Google and its subcontractors must view such horrifying imagery, in such vast volume, that the job can lead to PTSD—a problem that isn’t alleviated by work policies allowing for frequent breaks. As usual, Google says that it takes such issues seriously and is working to minimize them. But at Google scale, even a minimized problem has major implications.

Good: Google is being thoughtful about AI ethics

Sundar Pichai once declared that AI will be a more profound breakthrough for humanity than fire was in its day. But despite that enthusiasm, Google is acknowledging that AI, like fire, can be dangerous if it gets out of hand. In January, the company published a white paper saying that it welcomed government regulation of certain aspects of the technology, such as the need to disclose how an algorithm arrived at a particular decision. As Wired’s Tom Simonite has reported, Google is also being methodical about how it rolls out some of the AI functionality it has built. For example, its facial-recognition service, which can identify celebrities, is available only to carefully screened customers.

Bad: Its AI advisory board was a fiasco

Seeking outside counsel on responsible use of AI sounds like a reasonable idea. But Google’s AI ethics board collapsed less than two weeks after its introduction in late March. Google employees protested the inclusion of the president of the conservative think tank the Heritage Foundation and the CEO of a maker of drones with military applications, and infighting and a resignation precipitated the board’s official demise. Regardless of your opinion of specific members, the whole plan seemed ill-suited to holding Google accountable. The company says that the ethics board’s abrupt termination doesn’t mean that it’s lost interest in having outsiders play a part in guiding its use of AI. Here’s hoping the utter failure of its first attempt helps it figure out the right way to do it.