Scooped by John Evans
Classroom Challenges are lessons that support teachers in formative assessment. There are 100 lessons in total, 20 at each grade from 6 to 8 and 40 for ‘Career and College Readiness’ at High School Grades 9 and above. Some lessons are focused on developing math concepts, others on solving non-routine problems.
This special report delves into the convergence of education and artificial intelligence, offering a comprehensive guide for educators navigating this transformative landscape.
Via EDTECH@UTRGV
In case you need a reminder that you are now old, here’s this: Kid President just turned 20.
While new and perhaps useful, ChatGPT lacks the substance educators should be encouraging in their students' writing.
Via EDTECH@UTRGV
This post is loaded with some simple ways to integrate AI in the classroom. It's easier than you think, and it doesn't require expensive equipment or software. AI, or Artificial Intelligence, is ready for its closeup! Are you and your students ready to become "AI experimenters"?
Via Yashy Tohsaku
Decktopus is an AI presentation maker that creates polished presentations in seconds. You only need to type the presentation title, and your presentation is ready.
Via Nik Peachey
"The phrase “artificial intelligence” was coined by pointy-heads at MIT in 1955. Back then, it referred to an obscure field of computer science devoted to then-hypothetical programs that could engage in tasks that “require high-level mental processes such as: perceptual learning, memory organization, and critical reasoning.” Fast-forward to 2023: While AI has been a murmur in tech circles for the last few years, those conversations didn’t really get loud until the commercial release of products like ChatGPT and DALL-E. Now everyone is talking about AI, everywhere you go—hyping it, demonizing it, fearing it—but most of all, misunderstanding it.

This is partly because it’s a complex subject—we don’t even agree on what “intelligence” is, let alone “artificial intelligence”—but another reason so many are getting AI wrong essentially comes down to that familiar villain: capitalism. With the explosion in popular interest, advertisers and marketers are using terms like “AI,” “AI-powered,” and “artificial intelligence” as a selling point so much that they’re beginning to lose what little meaning they once had. When you read that something is “AI-powered,” don’t believe the hype."
Via Alfredo Calderón
A decade after its widespread adoption, it’s safe to say that U.S. schools utterly failed to acclimate our students to social media and to anticipate the profound damage it could do. The result of widespread social media use among students (said the U.S. surgeon general in a recent advisory) is increased anxiety, stress, and depression. Parents are lost, too. As a school technology specialist working with a population of middle- and high-schoolers, I see firsthand parents’ desperation when I host standing-room-only sessions about social media and mental health.
With the swift emergence of A.I., educators have an opportunity to do better.
This summer, K–12 schools must get to work drafting academic policies governing the use of A.I. and facilitating professional development for teachers about the new technology. Educators need to address this trend head-on with students while simultaneously redesigning instruction and assessment in an age of A.I. We cannot start this work quickly enough.
Explore how using AI chatbots can transform your school year! Dive into some of the tips in my new quick reference guide for educators.
Via Yashy Tohsaku
Today, many priorities for improvements to teaching and learning are unmet. Educators seek technology-enhanced approaches to these priorities that would be safe, effective, and scalable, and they naturally wonder whether the rapid advances in everyday technology could help. Like all of us, educators use AI-powered services in their everyday lives: voice assistants in their homes; tools that can correct grammar, complete sentences, and write essays; and automated trip planning on their phones. Many educators are actively exploring AI tools as they are newly released to the public.

Educators see opportunities to use AI-powered capabilities like speech recognition to increase the support available to students with disabilities, multilingual learners, and others who could benefit from greater adaptivity and personalization in digital tools for learning. They are exploring how AI can enable writing or improving lessons, as well as their process for finding, choosing, and adapting material for use in their lessons.

Educators are also aware of new risks. Useful, powerful functionality can be accompanied by new data privacy and security risks. Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are wary that the associations or automations created by AI may amplify unwanted biases. They have noted new ways in which students may represent others’ work as their own. They are well aware of “teachable moments” and pedagogical strategies that a human teacher can address but that AI models miss or misunderstand. They worry whether recommendations suggested by an algorithm would be fair. Educators’ concerns are manifold. Everyone in education has a responsibility to harness the good in AI to serve educational priorities while also protecting against the dangers that may arise as AI is integrated into EdTech.
To develop guidance for EdTech, the Department works closely with educational constituents: educational leaders—teachers, faculty, support staff, and other educators—researchers; policymakers; advocates and funders; technology developers; community members and organizations; and, above all, learners and their families/caregivers. Recently, through its activities with constituents, the Department noticed a sharp rise in interest and concern about AI. For example, a 2021 field scan found that developers of all kinds of technology systems—for student information, classroom instruction, school logistics, parent-teacher communication, and more—expect to add AI capabilities to their systems.

Through a series of four listening sessions conducted in June and August 2022 and attended by more than 700 attendees, it became clear that constituents believe action is required now to get ahead of the expected increase of AI in education technology—and they want to roll up their sleeves and start working together. In late 2022 and early 2023, the public became aware of new generative AI chatbots and began to explore how AI could be used to write essays, create lesson plans, produce images, create personalized assignments for students, and more. From public expression in social media, at conferences, and in news media, the Department learned more about the risks and benefits of AI-enabled chatbots. Yet this report will not focus on a specific AI tool, service, or announcement, because AI-enabled systems evolve rapidly. Finally, the Department engaged the educational policy expertise available internally and in its relationships with AI policy experts to shape the findings and recommendations in this report.
Via Edumorfosis, Jim Lerman
Artificial Intelligence (AI) has taken the world by storm, with new AI-powered tools such as ChatGPT opening up new opportunities in higher education for content creation, communication, and learning, while also raising new concerns about the misuses and overreach of technology. Our shared humanity has also become a key focal point within higher education, as faculty and leaders continue to wrestle with understanding and meeting the diverse needs of students and to find ways of cultivating institutional communities that support student well-being and belonging. For this year’s teaching and learning Horizon Report, then, our panelists’ discussions oscillated between these seemingly polar ideas: the supplanting of human activity with powerful new technological capabilities, and the need for more humanity at the center of everything we do. This report summarizes the results of those discussions and serves as one vantage point on where our future may be headed.
Via Edumorfosis
The report by Dr. Vivek Murthy cited a “profound risk of harm” to adolescent mental health and urged families to set limits and governments to set tougher standards for use.
Generative art—art produced by AI—means you no longer need to be an accomplished artist to produce fantastic images in whatever style you choose. To create AI art, you would typically create an account with a third-party company and have generated images served to you over the web.
But did you know you can create unlimited AI images on your own PC?
It's here, friends! Our November Choice Board where reading, learning, making, coding, technology and music can take you on adventures all month long.
Recent developments in artificial intelligence are changing how the world sees computing and challenging computing educators to rethink their approach to teaching. In this issue of Hello World, we tackle the big questions about AI and computing education, such as what AI literacy is and how we teach it. Our writers explore a range of topics, including gender bias in AI and what we can do about it; how to speak to young children about AI; and why anthropomorphism hinders learners' understanding of AI. Our feature articles also include a research digest on AI ethics for children and, of course, practical examples of how you can incorporate AI lessons in your classroom. AI is a topic we’ve addressed before in Hello World, and we'll keep covering this rapidly evolving field in future issues. Hello World issue 22 is a comprehensive snapshot of the current landscape of AI education. Download the free PDF.
Through these efforts, educators can embrace the opportunities provided by technology and create dynamic and effective learning environments for all students.
Via EDTECH@UTRGV
Our devices can be anything we want them to be. If we want them to be beguiling and dangerous, they will end up as bogeymen. But we deserve better, as do our children. The solution to mental illness and a fraying social fabric will not be impractical, hobbled devices or “unplugged” vacations that only the rich can afford. It will begin with a new, rational, national discussion of the way we live now and the way we want to live, devices and all.
Via Nik Peachey
"This is actually a well-documented idea called the ELIZA Effect, where people attribute human-like intelligence and emotions to computer programs, even when they know the responses are generated by simple algorithms. This phenomenon is named after ELIZA, an early A.I. program developed in the 1960s by Joseph Weizenbaum at MIT. Despite the program’s limited capabilities, users often formed emotional connections with ELIZA and attributed understanding and empathy to the program. The ELIZA Effect highlights our tendency to anthropomorphize technology and perceive more intelligence in A.I. systems than may be present. In the case of ELIZA, people knew it was a machine.
What’s fascinating is just how easily humans can be duped by machines. Part of this is due to innate pattern recognition. We have a natural cognitive bias toward finding patterns and attributing causality even when the data is random. When an A.I. produces responses that resemble human communication, our brains recognize the patterns and can be convinced that there’s a human behind the interaction. It just feels more human. Moreover, humans tend to have a default toward trust.
All of this points toward the need for students to understand the difference between an A.I.’s information processing and human cognition."
Artificial intelligence is arguably the most important technological development of our time – here are some terms you should know as the world wrestles with this new technology.
Via Yashy Tohsaku
Understanding how to best support students can be confusing and complex. To deal with this complexity, we develop naive theories and unscientific beliefs about what helps students learn. The science of learning has tested many of these beliefs, and evidence now shows that some of our most common beliefs about learners are wrong. These misconceptions about learning may cause us to waste some of our instructional time and effort—or, even worse, impair students’ learning.
Below, I will describe three common myths about learning, debunk those myths, and reinterpret those ideas according to our most up-to-date evidence.
Via Edumorfosis
In this podcast episode, I interview Angela Daniel, who works at the MIT Step Lab. In this in-depth interview, Angela discusses AI, deeper learning, and the need for students to think about the nature of…
Via Yashy Tohsaku
We believe teachers can use ChatGPT to increase their students' motivation for learning and actually prevent cheating. Here are three strategies for doing that.
Asking students not to use ChatGPT for their work is counterproductive in today’s landscape. As teachers, we should be educating them on HOW, WHEN and WHY to use it.
Via Edumorfosis
Well, here we are in May! Many of you have already finished out the school year, and several of you are in the final stretch. Whichever category you fall into, I hope you take a moment to try out these five new AI tools for educators. Whether you love it, loathe it, or aren’t quite sure about it, take a moment to give some AI tools a try to see what you think. For me, AI is becoming a trusty brainstorming buddy! Let’s dive into this month’s collection.
We asked educators and experts on all sides of the broader debates about ChatGPT to give us some strategies for AI-proofing assignments. Here’s what they told us: