Has anyone seen this yet: https://companion.ai/einstein Einstein is an AI with a computer. He logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework — automatically.
We just learned of this today - I am waiting to hear back from our CSM on how we can prevent students from using this and similar AI tools.
We have also asked our CSM for more information.
This tool will either access the Canvas API via a user-generated access token, or will interact with the Canvas UI via some kind of Chrome automation (which leverages the user's own browser session). The first type of access could theoretically be prevented by disallowing user-generated access tokens, by blocking the bot's user-agent (assuming that they are using an honest user-agent string… haha), or by trying to block the IP addresses used by this system (which would be a game of cat-and-mouse at best). The second type of access, via Chrome, would be nearly impossible to detect or block from the server side.
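To make the token-based mode concrete, here's a minimal sketch of the kind of request a bot holding a user-generated access token would issue against the Canvas REST API, and why user-agent blocking is no real obstacle. The domain and token below are placeholders, and no request is actually sent:

```python
from urllib.request import Request

# Hypothetical values -- a real bot would use the student's Canvas
# domain and a token the student generated under Account > Settings.
CANVAS_DOMAIN = "canvas.example.edu"
ACCESS_TOKEN = "1234~abcdef"  # placeholder token

def build_course_list_request(domain: str, token: str) -> Request:
    """Build (but do not send) the request a token-based bot would
    issue to enumerate a student's courses via the Canvas REST API."""
    req = Request(f"https://{domain}/api/v1/courses")
    # The token is all the bot needs; it inherits the student's permissions.
    req.add_header("Authorization", f"Bearer {token}")
    # Nothing stops the bot from claiming to be an ordinary browser,
    # which is why user-agent blocking is unreliable.
    req.add_header(
        "User-Agent",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0 Safari/537.36",
    )
    return req

req = build_course_list_request(CANVAS_DOMAIN, ACCESS_TOKEN)
print(req.full_url)
print(req.get_header("User-agent"))
```

Disabling personal access tokens (as discussed further down this thread) closes off this request entirely; it does nothing against the browser-automation mode, which rides on a legitimate session.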
I wonder if this could be a phishing exercise to get student (and potentially curious faculty) credentials so nefarious folks can access "all the things." 🧐
Has anyone heard anything from their CSM regarding this?
I just heard about this earlier today and am quite worried if this is how it seems to be. Have you been able to figure out more information about this?
Possibly related read, if this tool uses tokens (Oct. 2025) - Strengthening Security in Canvas: Updates to User Access Token Management: https://community.instructure.com/en/discussion/660299/strengthening-security-in-canvas-updates-to-…
Waiting for a reply. Guessing they'll just recommend limiting personal access token creation to admins, restricting students and observers from creating personal access tokens, and employing multi-factor authentication, while Instructure does nothing on their end. GPTatlas still isn't flagged in user page view logs.
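For what it's worth, one thing admins can do themselves is pull a student's page views (Canvas exposes them at `GET /api/v1/users/:user_id/page_views`) and scan the `user_agent` field for non-browser clients. A minimal sketch with made-up records, assuming the tool doesn't spoof a browser string (which, as noted above, it easily can):

```python
# Hypothetical page-view records shaped like a simplified Canvas
# GET /api/v1/users/:user_id/page_views response.
page_views = [
    {"url": "/courses/101/assignments/1", "user_agent": "Mozilla/5.0 ... Chrome/120.0"},
    {"url": "/courses/101/quizzes/7", "user_agent": "python-requests/2.31.0"},
    {"url": "/courses/101/discussion_topics/3", "user_agent": ""},
]

# Common substrings found in mainstream browser user-agent strings.
BROWSER_MARKERS = ("Mozilla", "Chrome", "Safari", "Firefox", "Edg")

def looks_automated(view: dict) -> bool:
    """Flag a page view whose user agent doesn't resemble a browser.
    Trivially defeated by a spoofed string, so treat hits as leads, not proof."""
    ua = view.get("user_agent") or ""
    return not any(marker in ua for marker in BROWSER_MARKERS)

suspicious = [v["url"] for v in page_views if looks_automated(v)]
print(suspicious)  # flags the requests-library hit and the empty user agent
```

A tool that mimics a full browser session (as Einstein reportedly does) will sail past this check, which is exactly the problem with server-side detection.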
This is what our CSM said: "TLDR: It looks like this is another browser-based AI tool someone developed, and these seem to be popping up weekly at this point. I empathize with educators during this time. Tools (such as this one) really fall into two broad categories: browser extension-based and API-based.
The main point here is that all of these tools require a learner to give up their credentials so the tool can access their data in Canvas. It is no different than a learner hiring a really smart student to complete assignments and take assessments in their place. It's academic dishonesty using a person or a tool to impersonate or pose as the student, and it's not good. Several of our partners, such as Respondus, provide tools that lock down browser activity and can prevent the use of browser extensions. Other partners, like TurnItIn, offer tools that can provide partial detection of AI-generated content. If you have not done so already, we also recommend that institutions concerned about API-powered tools enable the “Admin Manage Access Token” feature in Canvas, and use the “Limit Personal Access Tokens to Admins” feature to limit API token creation to administrators."
Einstein by Companion.AI is actually neither an API tool nor a browser extension. It operates as an autonomous AI agent running on its own virtual computer.
With so many AI tools dropping as extensions or integrations right now, it is completely understandable to assume it falls into one of those categories, but this one functions a bit differently.
Instead of integrating into your existing browser or requiring a backend API hook to access a platform, Einstein functions essentially as a remote, digital stand-in for a student.
Einstein mimics a standard user web session while operating entirely outside of the student's personal devices. This has sparked massive controversy in the higher education space, as it makes the tool incredibly difficult for school networks or anti-cheat software to detect via standard login monitoring.
Now we just need the Teacher version to drop to cover all our bases. >:(
Does anyone know whether requiring multi-factor authentication limits students' ability to use this, especially if it's part of an SSO setup?
You can create virtual devices for SSO purposes, so I imagine it would be more of a speed bump than a solution.
@RobbieGrant Thank you for starting a conversation about the "Einstein" agent (Companion AI) and sharing your insights and expertise as you learn more. You've been a long-time member of this Community (and friend) so I respect your sharing so much! I'm confident we share the same frustration in seeing tools marketed this way - explicitly designed to bypass genuine learning.
Many of us have been following the conversation here and I wanted to provide some additional context on how we’re thinking about these types of tools as well as continue the conversation of how we can tackle these emergent agents as a community.
1. Our focus: Meaningful learning over "Cat-and-Mouse" games. It’s a reality of our era that new AI agents will continue to emerge. While it’s tempting to want to block every new agent that pops up, we’ve been clear in our philosophy: we are focusing our energy on supporting good teaching and learning rather than trying to "out-tech" every single cheating tool.
History shows that technical "arms races" often lead to a cycle of detection and evasion that doesn't actually improve the student experience. Instead, we want to empower you with pedagogical strategies - like authentic assessment and iterative feedback - that make tools like this less effective and less tempting for students.
2. A critical lesson in Security and Privacy. While the "cheating" aspect is the most visible problem, this specific app is actually a perfect case study for a different, perhaps more urgent, conversation we need to have with students: Personal Security.
If you look closely at how this agent functions, it is extremely unsafe. To work as advertised, these types of "companions" often require students to grant them "God-mode" access to their browser, their Canvas login credentials, or even their personal data.
Moving Forward. I believe the best long-term defense against these cheating tools is going to be through human connection and creating "teachable moments" with our students.
We would love to hear how all of you are handling this on the ground. Please share your thoughts and experiences:
We're already starting to see some real thought leadership in the Artificial Intelligence in Education space of this Community, let's continue to channel our powers for good there!
@RobbieGrant Did you actually test this? My assumption is that "credentials" could just mean API access token, as that is basically the same thing as a username and password, but seems safer to students who do not know better.
I'm very interested in knowing more about this and especially, if any admins or staff have tried using it.
When a student connects one of these tools to their Canvas account, the tool can access all content from that course (assignments, quizzes, instructions, files, and other materials) using the student’s permissions. Even if that access is technically “authorized” through the student’s account, instructors have no visibility into what happens after that access occurs. I am very concerned about course materials being copied or used outside of Canvas without transparency or control.
Specifically, I am concerned that these tools can copy, store, scrape, use, or process my course materials without my knowledge or consent. As the creator of my course content, I consider this my intellectual property. I did not agree to have it exported, replicated, or potentially used to train AI systems simply because a student chose to connect an outside tool, and I have course policies stating as such.
No, we have not tried it, as using this tool violates NMSU Administrative Rules and Procedures on sharing credentials. We are working this through our CISO and other channels. Its use by students would also violate the Student Code of Conduct because it would be considered fraud.
Do any of you know for sure whether this particular tool connects by students making a token or are they required to give over their username and password?
Yep you are spot on!
From the research we have done, it appears the tool requires students to hand over their full login credentials.
@RobbieGrant I tried signing up for it with a dummy account I have just to see what it actually asks the user for, but they asked for a credit card in order to do the free trial and that was a no-go for me.
"Moving Forward.
I believe the best long-term defense against these cheating tools is going to be through human connection and creating "teachable moments" with our students.
So, basically, you're waving a white flag and the idea of using an LMS is becoming obsolete. I am so sick of hearing this same answer from admins, everywhere. A student who wants to use Einstein does not care about the human connection and teachable moments. You're just admitting you've lost and don't have a solution to this.
What is your proposal for how Instructure (or any LMS operator) should deal with this?
@jdr04070 I completely hear your frustration. Please know the 'Moving Forward' section was simply meant to share my personal perspective from my years in the classroom. As educators, we're navigating a massive shift together, and you've beautifully highlighted an age-old challenge: how do we reach the students who just want to get by, whether they turn to technology or the classmate next to them? That’s exactly why I believe in the power of community, and why I included that last section. I genuinely want to hear how other educators are adapting to connect with these learners.
Related to this, I read this post yesterday about how an LMS could detect AI agents completing coursework. Has Instructure explored any similar techniques? https://thisisntfine.substack.com/p/yes-you-can-detect-ai-agents-in-your?utm_campaign=post&utm_medium=email&triedRedirect=true
Did anyone else reach out to the company and ask questions like we did? The language on their home page seems to have changed…
@jdr04070 I must admit, I was looking for a different answer as well.
For students who do not care about the connection or teaching moments, we're leaning more towards emphasizing the security risks those students face by installing OpenClaw (which apparently can have full control of your device) and providing their full Canvas credentials (which often grant access to their entire institutional account).
When we finally get language together, I will see if there is enough language that is not specific to Cornell and I will try to remember to share it here. If others have language their institutions are sharing out, please feel free to share it here for others.
This is a HUGE red flag, and exactly why educators should be teaching students about the responsible use of AI. If we don't understand the good side, we won't be able to recognize the bad when it shows up (hint, Companion AI). Here is a Substack blog & video where she creates the account, downloads the wrapper (hint, it's Open Claw), and then begins the process to launch the tool. https://michellekassorla.substack.com/p/the-first-agentic-ai-lms-killer-is
This is what our CSM sent the evening of 2/23 after we found out about the tool. Thank you for flagging this. We are aware of Einstein by Companion.ai and similar "agentic" AI tools. They are certainly a concern as they represent a shift from simple text generation to full task automation. I’ve consulted with our internal teams, and I want to be direct about the current landscape:
I suggest highlighting the security and credential-sharing risks to faculty and students. This is often the most effective way to deter usage while we work on the larger issue.
@james_whalley Sharing language would be excellent. When you have that ready to share, remind me and we can break this thread off for others to share and riff on as well.
You are spot on about having course policies stating the terms of use for your course materials. This gives the necessary legal notice to your students about how they can use your content.
A student does not need an AI tool to copy or store your course materials without consent. A moderately low level of tech savvy is all it takes to export, print, save, or store all the contents of a course in an LMS or any other source on the internet, just as a shoplifter can walk out of a store without paying. The fact that they can does not make it ethical or legal.
Thanks @james_whalley and @Renee Carney. I'd be interested in seeing any language you can share as our institution is also looking to draft something.
An arms race is exactly how I described Einstein AI to a colleague today. On the bright side, there are starting to be solutions on the market now, albeit very bleeding-edge ones, to combat this. For example, LightLeaf’s Atlas Interceptor is a new plug‑in for Canvas that shows promise in helping reduce cheating and misuse of AI tools in online courses. After seeing a demo recently, here's a video overview and also some highlights:
I'm sorry to say it, but this sounds a lot like abdication of responsibility. Tools like Einstein strike not just at the heart of effective education, but also at the heart of Instructure's business model. I am surprised and disappointed to see in this post, and also in @Zach Pendleton's post below, that Instructure's strategy is to make it the problem of instructors and third-party vendors.
Quick update… seems the service may have something going on…
Well… Well done, Advait. I think it worked - Ha!
https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/02/26/agentic-ai-can-complete-whole-courses-now?utm_source=Inside+Higher+Ed&utm_campaign=e703c64873-DNU_2021_COPY_02&utm_medium=email&utm_term=0_1fcbc04421-e703c64873-236040337&mc_cid=e703c64873&mc_eid=affc47a4a2
"Earlier this week, Paliwal—a 23-year-old tech entrepreneur who dropped out of the computer science master’s program at Brown University in 2024—launched Einstein, an agentic AI tool specifically designed to connect with the popular learning management system Canvas."
"Paliwal wants to make the public aware of agentic AI’s potential to upend higher ed, and he saw creating Einstein and marketing it as a cheating tool as the best way to incite rage and focus attention on the issue. “My fear was that if no one realizes the capability of this project, then the right change won’t happen,” he said. “Or if it happens it’ll be too late.”"
I confirmed this with their support email.
If this software acts as a virtual computer, it may be able to get past Respondus.
According to Respondus’ official support documentation, students are not permitted to run LockDown Browser if any virtual machine software is detected, including VMware, VirtualBox, Parallels, Windows emulators, thin apps, or other virtualization-related drivers. When detected, the browser shows a warning and will not launch. https://support.respondus.com/hc/en-us/articles/4409604116123-I-receive-a-warning-The-browser-cant-be-used-in-virtual-machine-software-such-as-Virtual-PC-VMWare-and-Parallels
People are reporting that it's a wrapper for OpenClaw, which is designed to run locally. You may be right (and I hope you are) that they can detect this activity, but I've reached out to Respondus to get more information on how they're addressing this.
While the Companion AI site may be returning a 404 for now, I hope this discussion continues as the agentic phase of AI arrives. In addition to the agents themselves, as they come online we will also have tons of opportunistic copycats and soundalikes (https://www.learneinstein.com/) to deal with, and this thread has been very informative.
Did Companion.ai pull the plug on their Einstein product? That page now gets a 404 error.
Wonder if it's due to infringement on Salesforce's AI tool of the same name, or that shining a light makes cockroaches scatter.
Probably something to do with this: https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/02/26/agentic-ai-can-complete-whole-courses-now
After this story was published, Paliwal said he received a cease-and-desist letter from Instructure, which owns Canvas, and has since taken down Einstein’s website.
We have also heard it could be related to this as well: https://www.learneinstein.com
Robbie
If you click the Back to Home link, it takes you to https://companion.ai/ which has a link to Pricing. It appears you can still access Einstein, but it takes a few more clicks and a payment.
Companion.ai is still on (just without /einstein)
Likely. It's now companion.ai without /einstein.