4 ways University City School District fosters learning equity

Editor’s note: As part of the ExploreEDU event series, schools are working with Google for Education Premier Partners to throw open their doors and invite neighboring educators to learn from their firsthand experience using Google tools to innovate and improve. To see if there is an event near you, visit the ExploreEDU site. For those who can’t join in person, we’ve asked the host schools to share their experiences and tips in a blog post. Today’s guest author is Robert Dillon, Director of Innovation Learning at University City School District in the St. Louis area. They will host an ExploreEDU event on Dec. 6 with Tierney Brothers.

All students deserve an excellent, engaging education. A big part of our mission at University City School District is to bring rich learning experiences and digital resources to all of our kids, 70 percent of whom are affected by poverty daily. I want to share a few of the ways we’re designing a more equitable learning environment for our students.

1. Igniting positive risk-taking

Taking a new approach to learning requires shifting the mentality of teachers and administrators from compliance and fear to risk and innovation. This starts with senior leadership setting an example, creating a sense of urgency and communicating openly. Our superintendent and principals acknowledge there’s no single formula for creating change, and no one has all the answers, so we need to be willing to fail and to iterate. This culture of experimentation and transparency liberates teachers to try new things, and encourages the team to solve hard problems together. We’ve used Google Classroom as a platform for innovative teachers to gather across buildings to discuss ideas, provide feedback to our education technology solution partners, and decrease any sense of isolation in the district.

Sharing information is key to building trust and energy in the system. We’re constantly talking with other districts, and bringing people together at events like ExploreEDU to break down the walls between educators in our region. We’re also meeting with all of our principals to talk about their moonshot ideas and the resources they might need to realize these changes.

2. Expanding capacity through the community

The district leadership team also harnesses the power of our community by enlisting parents to share their expertise with us. For instance, one student’s parent who previously led a nonprofit organization is helping my team coordinate parent focus groups to test new ideas surrounding learning academies, competency-based learning, and building a greater sense of belonging in our schools.

Other parents get involved by leading student groups: one parent who sees the learning power of robotics leads our middle school robotics club. Other parents who are active in the arts connect us to community organizations and build relationships with their leaders so we can make the most of those partnerships. This extends our network of teachers and mentors, giving students access to a breadth of knowledge and experience.

3. Improving learning through technology

We’re able to try new approaches to learning because we have the tools to support it; we also recognize that learning comes first. We selected our technology platform to meet specific goals: increasing collaboration and teaching real-world skills. Those goals drove us to choose Google for Education, which we’ve used for over six years now to help students, teachers and administrators create and share information. In our fifth grade robotics classes, students use Google Docs to write stories about their experiences building robots. They can now share their stories with fifth graders across the region who are working on similar projects. The power of storytelling, and its application in the real world, is amplified when students have the tools to reach an audience beyond their class and teacher.

4. Encouraging student choice

A challenge to equity is giving students the flexibility to learn about topics they’re passionate about, in ways that work best for them. In social studies and elective classes in particular, teachers are introducing opportunities for students to choose projects that have local impact. For example, many families in our district live in food deserts, which means they have limited access to affordable, healthy food. One middle school class discussed this problem in the context of race and poverty. They proposed solutions: What if schools served as farmer’s markets, or donated surplus cafeteria food to families in need? It’s inspiring to see students learn by solving problems that are relevant to our community.

Achieving greater equity in learning starts with giving our kids everyday opportunities to close the experience gap. A lot of that has to do with having the attitude, partners, tools and autonomy to make these opportunities real.

Detecting diabetic eye disease with machine learning

Diabetic retinopathy — an eye condition that affects people with diabetes — is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. The disease can be treated if detected early, but if not, it can lead to irreversible blindness.

One of the most common ways to detect diabetic eye disease is to have a specialist examine pictures of the back of the eye and determine whether there are signs of the disease, and if so, how severe it is. While annual screening is recommended for all patients with diabetes, many people live in areas without easy access to specialist care. That means millions of people aren’t getting the care they need to prevent loss of vision.

A few years ago, a Google research team began studying whether machine learning could be used to screen for diabetic retinopathy (DR). Today, in the Journal of the American Medical Association, we’ve published our results: a deep learning algorithm capable of interpreting signs of DR in retinal photographs, potentially helping doctors screen more patients, especially in underserved communities with limited resources.

Examples of retinal photographs that are taken to screen for DR. A healthy retina can be seen on the left; the retina on the right has lesions, which are indicative of bleeding and fluid leakage in the eye.

Working with a team of doctors in India and the U.S., we created a dataset of 128,000 images and used them to train a deep neural network to detect diabetic retinopathy. We then compared our algorithm’s performance to another set of images examined by a panel of board-certified ophthalmologists. Our algorithm performs on par with the ophthalmologists, achieving both high sensitivity and specificity. For more details, see our post on the Research blog.
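The comparison above rests on two standard screening metrics: sensitivity (the fraction of diseased eyes the algorithm correctly flags) and specificity (the fraction of healthy eyes it correctly clears). A minimal sketch of how these are computed from a confusion matrix, using made-up labels and predictions rather than any data from the study:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical grades: 1 = referable DR present, 0 = absent
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
predictions = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(labels, predictions)
```

A screening algorithm tuned for high sensitivity catches more disease at the cost of more false alarms; achieving both high sensitivity and high specificity, as the study reports, means few cases missed and few healthy patients referred unnecessarily.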

We’re excited by the results, but there’s a lot more to do before an algorithm like this can be used widely. For example, interpretation of a 2D retinal photograph is only one step in the process of diagnosing diabetic eye disease — in some cases, doctors use a 3D imaging technology to examine various layers of a retina in detail. Our colleagues at DeepMind are working on applying machine learning to that method. In the future, these two complementary methods might be used together to assist doctors in the diagnosis of a wide spectrum of eye diseases.

Automated, highly accurate screening methods have the potential to assist doctors in evaluating more patients and quickly routing those who need help to a specialist. We hope this study will be one of many examples to come demonstrating the ability of machine learning to help solve important problems in healthcare.

Saying goodbye to Content Keywords

In the early days – back when Search Console was still called Webmaster Tools – the content keywords feature was the only way to see what Googlebot found when it crawled a website. It was useful to see that Google was able to crawl your pages at all, or if your site was hacked.

Nowadays, you can easily check any page on your website and immediately see how Googlebot fetches it, Search Analytics shows you the queries for which we’ve shown your site in Search, and Google informs you of many kinds of hacks automatically. Additionally, users were often confused about the keywords listed in Content Keywords. And so, the time has come to retire the Content Keywords feature in Search Console.

The words on your pages, the keywords if you will, are still important for Google’s (and your users’) understanding of your pages. While our systems have gotten better, they can’t read your mind: be clear about what your site is about, and what you’d like to be found for. Tell visitors what makes your site, your products and services, special!

What was your most surprising, or favorite, keyword shown? Let us know in the comments!

Toyota powers thousands of European showrooms with Chrome digital signage

Editor’s note: Today we hear from Steven Simons, IT Manager for Customer Retail and Product Systems at Toyota Motor, Europe. Read how the world’s largest automaker used Chrome digital signage to provide its showroom customers an innovative and immersive customer experience.

It’s no secret that the internet has transformed how people buy cars — a Toyota study shows an increasing number of people research online before visiting a retailer. In fact, the study found, most people purchase a car after visiting only one showroom. So at Toyota Motor Europe, we set out to create a more engaging customer experience by extending our customers’ digital travels into the showroom and connecting online browsing with seeing our cars in person.

We first experimented with digital signage in our showrooms in 2014 to display information about our cars in ways that reflected what customers saw online. However, the system we were using was expensive, unstable and difficult to maintain and manage.

Toyota TV on Chrome

So, we turned to Chrome in late 2015 and replaced our existing digital signage with Asus Chromeboxes connected to 42-inch flatscreen TVs. We manage and program all of the devices centrally from Toyota headquarters. Retailers just install the Chromeboxes and TVs, and they’re up and running. That way, retailers can focus on their customers rather than on technology.

The Chrome-based digital signage has become an important sales tool. It displays videos about Toyota vehicles, customized according to the showroom area where the signs are located. So, if a system is installed in a showroom where hybrid cars are popular, the videos highlight hybrids.

Salespeople use the screens to show customers in-depth information about Toyota vehicles. Thanks to Chrome, salespeople can easily answer customers’ technical questions about things like a car’s Bluetooth capabilities, leading to a smoother sales process. The signs also feature a car configurator, which allows customers to explore and personalize their vehicles. Consumers typically come in with plenty of online research in hand, and they can pick right back up with these configurations in store on our digital signage. Across Europe, 100,000 customers a month use the signage.

Toyota retail configurator on Chrome

We’ve deployed Chrome digital signage in 3,000 showrooms so far, and plan to install between 7,000 and 10,000 digital signs in total across 3,600 Toyota retailers in Europe. Google Cloud partner Fourcast worked with us on the deployment with a packaged, end-to-end solution, and ensured the systems were delivered on a tight, five-day timeframe.

The Chrome-based digital signage is more reliable and easier to deploy than the previous solution, reducing time spent on maintenance, management and troubleshooting. It also saves us on hardware and deployment costs.

Chrome-based digital signage has done everything we hoped it would. Its features let us show off what’s great about Toyota cars. It’s popular with sales staff and customers, as evidenced by increased usage since it was deployed. Retailer demand is greater than we estimated, showing that it’s an important sales enabler. Overall, the system is meeting our customers’ needs while reinforcing our reputation as a technically sophisticated company. Thanks to Chrome digital signage, our customers enjoy a more unified online and offline sales experience.

Our most detailed view of Earth across space and time

In 2013, we released Google Earth Timelapse, our most comprehensive picture of the Earth’s changing surface. This interactive experience enabled people to explore these changes like never before—to watch the sprouting of Dubai’s artificial Palm Islands, the retreat of Alaska’s Columbia Glacier, and the impressive urban expansion of Las Vegas, Nevada. Today, we’re making our largest update to Timelapse yet, with four additional years of imagery, petabytes of new data, and a sharper view of the Earth from 1984 to 2016. We’ve even teamed up again with our friends at TIME to give you an updated take on compelling locations. 

Meandering river in Nyingchi, Tibet, China [view in Timelapse] (Image credit: Landsat / Copernicus*)

Leveraging the same techniques we used to improve Google Maps and Google Earth back in June, the new Timelapse reveals a sharper view of our planet, with truer colors and fewer distracting artifacts. A great example of this is San Francisco and Oakland in California:

Bay Bridge

San Francisco – Oakland Bay Bridge reconstruction [view in Timelapse] (Image credit: Landsat / Copernicus*)

There’s much more to see, including glacial movement in Antarctica, urban growth, forest gain and loss, and infrastructure development:

Using Google Earth Engine, we sifted through about three quadrillion pixels—that’s 3 followed by 15 zeroes—from more than 5,000,000 satellite images. For this latest update, we had access to more images from the past, thanks to the Landsat Global Archive Consolidation Program, and fresh images from two new satellites, Landsat 8 and Sentinel-2.

We took the best of all those pixels to create 33 images of the entire planet, one for each year. We then encoded these new 3.95 terapixel global images into just over 25,000,000 overlapping multi-resolution video tiles, made interactively explorable by Carnegie Mellon CREATE Lab’s Time Machine library, a technology for creating and viewing zoomable and pannable timelapses over space and time.
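To get a feel for how a terapixel-scale image decomposes into tens of millions of tiles, here is a rough back-of-the-envelope sketch. The image dimensions and tile size are illustrative assumptions, and it models a simple non-overlapping power-of-two pyramid, so it undercounts the overlapping video tiles actually used:

```python
import math

def pyramid_tile_count(width_px, height_px, tile_px=512):
    """Count tiles in a power-of-two pyramid over an image:
    tile the full resolution, halve the image, tile again,
    and repeat until a single tile covers everything."""
    total = 0
    w, h = width_px, height_px
    while True:
        cols = math.ceil(w / tile_px)
        rows = math.ceil(h / tile_px)
        total += cols * rows
        if cols == 1 and rows == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return total

# Roughly 3.9 terapixels per yearly image (dimensions are illustrative)
count = pyramid_tile_count(2_800_000, 1_400_000)
```

Even this simplified pyramid lands around 20 million tiles for a single ~4-terapixel mosaic, which makes the reported figure of just over 25,000,000 overlapping multi-resolution video tiles feel plausible.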

Ft. McMurray

Alberta Tar Sands, Canada [View in Timelapse] (Image credit: Landsat / Copernicus*)

To view the new Timelapse, head over to the Earth Engine website. You can also view the new annual mosaics in Google Earth’s historical imagery feature on desktop, or spend a mesmerizing 40 minutes watching this YouTube playlist. Happy exploring!

*Landsat imagery courtesy of NASA Goddard Space Flight Center and U.S. Geological Survey. Images also contain modified Copernicus Sentinel data 2015–2016.