Video: K-12 Cybersecurity in the Age of AI: Taking Back Control | Duration: 2820s | Summary: K-12 Cybersecurity in the Age of AI: Taking Back Control | Chapters: Welcome and Introduction (0s), EdTech and AI (305.15s), Rapid AI Adoption (438.025s), Data Control Challenges (656.92s), AI in Education (871.445s), Chromebook Security Concerns (1280.4149s), Data Control Settings (1435.8949s), Product Features Overview (1564.5s), Browser Security Solutions (1974.745s), Open Q&A Session (2048.94s), AI in Education (2106.65s), Implementation Support Process (2193.5798s), Itopia's Company Background (2343.285s), Data Privacy Compliance (2477.645s), Conclusion and Wrap-up (2707.335s)
Transcript for "K-12 Cybersecurity in the Age of AI: Taking Back Control":
Hello, everybody, and welcome to our webinar, K-12 Cybersecurity in the Age of AI: Taking Back Control. I'm Christine Weiser. I'm the content director for Tech and Learning, and I'm very happy to host today's conversation sponsored by our friends at Itopia. So we're gonna be talking about some good stuff today. It's no surprise that we have heard a lot about AI these days, but one thing we hear less about is making sure all of these new tools are not putting any of our sensitive school data at risk. That was why I was so impressed when I learned about a new product from Itopia called SecureClass that offers customizable access controls so district administrators can decide exactly what access is allowed and what's not. We're gonna take a deep dive into some very real threats that schools are exposed to and some ways that SecureClass can protect your school from those threats and enable administrators to take charge of their cybersecurity strategy, ensure proactive threat prevention, and adopt AI tools responsibly. So we've got lots to cover, but first, I'm gonna go over a few tips to help you navigate the webinar platform and make the most of your time here. We are going to first go to that chat tab on the far right-hand side of the screen. This is where we can say hello, connect with fellow attendees, and share any thoughts or feedback. So let's give it a try. If you haven't already, let us know where you're from. Post a hello. Add an emoji. We're here together to learn as a community, so let's get to know each other. Don't be shy. Alright. The chat tab is also where you're gonna go if you need any help from us. If you have any tech problems, anything like that, you can put a question in chat. And, also, if you have technical issues (hello, Jerry, welcome!)
Then you can ask us about that there, but also, just so you know, a quick browser refresh will actually solve most problems. Hello, Mike. I see Mike and Lisa. I see some of our advisers. Jennifer, hello. Welcome, welcome, welcome. Great to have you all here. Thank you. We also wanted to let you know that we have closed captions available on the platform. If you need closed captioning, just click on the CC button, and that's gonna add the closed captioning. Hello, Jim. Good to see you. And Tim and Cali. Oh, I'm seeing so many friends here. I love it. You can also translate into a variety of languages in that closed caption option. The other place I wanna go over to is the Q&A section. We know the best way to get involved and engage with our speakers is to ask some questions, so that Q&A tab is where you're going to be asking questions to our advisors. We are going to do our best to get to as many questions as we can through live chat, but we do have a lot of information to cover. So if we don't get to all of your questions, not to worry. We're gonna send them all to our speakers, and we'll get answers to you by email. And the very last stop on our tour here is the docs tab. Our friends at Itopia have added some helpful handouts there to accompany our conversation. So take a moment to click on that docs section and explore. Also, a reminder to stick around to the end, because we will be announcing the winner of our $250 Amazon gift card. With that, we are ready to get started, and I'm going to introduce our fabulous speakers today. First, we have Kyle Berger, who is the CTO of Grapevine-Colleyville ISD in Texas. Kyle is a longtime friend and advisor to Tech and Learning as well as the winner of our innovative leader award. So I am thrilled that he can join us today to lend his insight as a cutting-edge school district administrator.
And we also have Jena Draper, who's the chief innovation officer at Itopia. I have followed Jena's very impressive career since she founded the groundbreaking CatchOn product, and I'm super excited to learn more about her latest work with SecureClass at Itopia. So with that, I'm gonna hand it over to Kyle and Jena to walk us through some of the threats and to talk a little bit about how SecureClass can help keep your school districts safe. Take it away, Kyle and Jena. Thanks, Christine. Thank you, everybody, for being here. We're so happy to have you and have this opportunity to speak with you about AI and data privacy and data security, the hot topics in the market today. So we're gonna go through this agenda today, kind of going through the background of how we got to this place, the convergence of all this EdTech and AI, where this plays into data privacy and cybersecurity, and then: how do we really get our arms around it? Can we get our arms around it? Is this something we can control? What is in our control and what isn't? And giving you some tools to help guide you through this really exciting but also pretty risky time in EdTech's history. So with that, going into the history of education and the overall EdTech landscape, you know, I have had a catbird seat in this position for the last nine years. I founded CatchOn a while back, back in 2016. That's how I met Kyle. He was one of my first customers, one of my first partners. So blessed to be working with Kyle and other districts who have really had a hand in shaping how I have grown in this space. But I've gotten to watch and witness the real usage of these tools, from pre-COVID, you know, the reluctance around technology, the excitement, but yet kind of still the hesitance in the classroom, to COVID and the necessity of technology.
And now we've kind of gone into this post-COVID motion of AI and all these wonderful opportunities, but also these huge risks. And you can see on this really interesting chart here that, as is no surprise to anyone, as EdTech adoption has grown over the years, so have the cybersecurity incidents. So the proliferation of EdTech, the amount of tools that are out there, the sheer volume of freemium tools, and then, of course, these tools that have now embedded AI into their platforms, sometimes with us knowing it and accepting those terms and sometimes without us accepting or knowing those terms, has created this extreme risk for us. But how do we manage that? Because we can't just turn a blind eye to it. Kyle, would you like to add anything to that and what you've seen? Absolutely, Jena. Thank you. So, you know, being a CTO in K-12 now for twenty-three years, we've seen and lived this cycle of just mass adoption. And of course, the pandemic really exploded that. As we look at and talk about AI, you've heard it compared to, you know, how we had calculators or cell phones come into our classroom, or Google in a sense for the Internet, and the fear factor of that. But one thing with this mass adoption that is quite different with AI is how quickly that adoption rate has happened. You might have heard some stats. When you look at reaching 50,000,000 users: how long did it take the airline industry to get 50,000,000 users? Sixty-eight years. Then you look at the Internet in general; to get 50,000,000 users took seven years. Facebook, to get to 50,000,000, took three years. Generative AI? Five weeks. The adoption rate is just tremendous.
And to keep up with that, for those of y'all that are also directors or coordinators or chiefs of technology in districts, to keep up with that rapid growth, like Jena mentioned, the freemiums and AI being embedded, the ability to keep our arms around that is getting very, very difficult. So that's where we're looking forward to having that discussion with you today, to talk about how we can help leverage that protection and give us a little more insight, as we're responsible, of course, for the safety and security of our kids and also data privacy. So, Jena, let's kinda talk about what we're looking at. Yeah. Absolutely. You know, some of the stats around why we can't say no. So, looking at the overall broader workplace adoption, at the trends of what employers are expecting from students as they graduate college or maybe go into the workforce straight from high school and into the trades: in every single profession, we are going to see a dramatic increase in AI. I mean, I can raise my hand. I use AI on an absolute daily basis. And I'll tell you a funny story when we get into the product later about how I've locked myself out of AI. It wasn't a good thing, as my marketing team can attest. I did not have access to my AI. I was not happy. But, you know, we use this for efficiency. I mean, it's not all bad. There are so many wonderful benefits. A smart mind and our innate skills, coupled with these tools that help us do things a lot faster and accelerate processes, is a really beautiful thing. So we can't say no to this, because I feel like if we do say no, Kyle, we're indirectly, or directly, hindering the students' ability to be successful and to give themselves a leg up in the future whenever they're putting their names on resumes and trying to apply for jobs. They wouldn't have those critical skills starting at this early age.
It's almost the equivalent of not being able to use a word processor thirty years ago. Absolutely. You know, people would always say for us in education, too, that we used to stand on preparing students for careers that don't exist yet, jobs that don't exist. Yes, that's a part of it, but AI is gonna be embedded in jobs that exist today, folks. Jobs that have always been around are adapting. So it's not just about future-proofing. It's getting our students ready for the world we live in today. It's gonna touch every aspect of business and every career path that's out there, and teaching the foundational aspect of how to use it is critical for us, so we get our students and our staff and everybody using it in the appropriate and safe way. So who actually controls the data? I mean, we've seen for a long time, you know, with CatchOn, my previous life, we had a really big hand in helping school districts identify the tools in use, figure out if those tools had privacy policies, and make sure the tools in use were the ones that were intended to be used. But that still leaves us vulnerable to a privacy policy, a document, something that the vendor is telling us that they're doing. And sometimes those get updated very frequently, and sometimes they don't get updated at all. Kyle, you were telling me a story the other day about AI tools and your vetting process. Can you tell us about that? Sure. I mean, hopefully, in your districts and organizations, you do reviews of software packages before you onboard them into your system. Well, nowadays, we're having to really revamp that process and review our products over and over, because AI is being added into products we've owned for years. A product may have gone through a tech vetting process last year or the year before.
We're having to do that all over again because of that element of AI that's embedded. And a lot of times, when it comes to who really owns that data or who's controlling that data, it's an add-on within that company that's going to another AI engine somewhere else. And so you have to keep looking at and evaluating those terms of service and what's being added in there. Because just about every product you see nowadays will mention some sort of usage of AI, and we have to reevaluate and keep checking to see what level of AI is being used in there and what control methods they have over that. Yeah. Absolutely. And then eventually, it just comes down to what data can be shared with that AI tool. Right? I mean, if I'm just typing in silly things about, you know, cats or something, it doesn't really matter. But if I'm starting to put my PII in there, whether knowingly or unknowingly, then that's what creates the risk. Right? Absolutely. Okay. So let's go ahead and stop for a quick poll, guys. Alright. What are your biggest barriers preventing your district from improving cybersecurity? Don't be shy. No one knows who you are, so you can be as honest as you want. Again, you'll notice on the top right of your screen the polling area, if you need to click on that. Getting some votes in. Okay. A very common theme you see showing up in some of these, and we've seen it across the industry as well, is the lack of in-house cybersecurity experts. A lot of folks just don't have the staffing ability, or we can't find appropriate staff. A lot of times we can't afford appropriate staff to have cybersecurity experts within our organization. So we really need to leverage other tools and resources to be able to do that. And if you are living with unlimited budgets, you're very blessed right now. Right? Okay. Awesome.
How do I get back to those slides, guys? There you go. It's always on these tech talks that we lose our access to things and have tech problems. It's always the irony of these. Right? Okay. So we're moving forward. So, Kyle, there's no one who speaks better about this than you. I mean, I can obviously give my opinion on this from a vendor's perspective and what I've seen in my past. But you hit on a really important point. You talked about these applications that you have to re-vet. You talked about how EdTech has grown, and then generative AI reaching 50,000,000 users in just five weeks. So why is allow-and-block, you know, whitelist and blacklist, why is yes-and-no no longer an option for us anymore? Absolutely. With the integration of AI into every facet of learning, it's much like the foundational skills of teaching how to use a calculator or how to use the Internet in general. If we take that stance of just straight all-out blocking, and I don't know about anybody else on the call, but if I were to all-out block anything like this in my district, the students are gonna find a path there. They're gonna find a path to figure out how to use the tools to help them in different ways. So we need to not just block all this content, but teach the appropriate ways to use it and build up responsible use, because that's gonna be a part of their everyday life when they get out there. And having access and understanding how this stuff works is so important. So I know a lot of us, and I've done it myself too, when a lot of these things first start coming out, our first reaction is block. Just block and sit and wait. And this is not something that, unfortunately, you're gonna be able to do, especially with the proliferation and growth of AI.
It's ending up in everything. You're gonna end up blocking everything if that's the way you're gonna go to prevent access to some sort of AI tool. So, really, it comes down to needing a different kind of approach, a different way to look at it: okay, we need to have access to these tools, so how can we create an insightful way to monitor and guide the usage of this in a protective fashion? I think that's what we're gonna talk about and show y'all, and have some conversation on how we can potentially look at that in this ever-changing world of AI. Perfectly put. That's exactly what drew us to create SecureClass. So we're looking at this product that we've created, a browser and data security platform, a tool that can really be used to help secure students, to protect them, to make sure that when they're learning in the browser, on the Chromebooks, it's extremely secure from the threats that are most imminent in the ways that they learn. So that's in the browser. That's the websites they're going to, the URLs. That's the SaaS apps, the web applications, the SaaS apps that have AI embedded in them. That's GenAI in a nutshell. All of these different sites have the ability to have threats happen to them, whether that's through malware, phishing, or emerging threats. These bad actors are using AI on a daily basis. I'll share some stats with you as we progress through the presentation, but it's alarming how many threats are out there in just the browser itself. And we've always talked in cybersecurity about protecting the servers, but protecting the servers does not translate to protecting the students. So what can we do?
What can we do to help enable them to adopt these really innovative tools and give them the ability to experiment and play and learn and grow through these tools, while also making sure we protect them from themselves as well, teachers and students, making sure that data is not shared in a way that we do not agree with or that doesn't meet our requirements and policies? So that's what we're calling data control. Kyle has hit on this quite a bit: block versus unblock, yes versus no. It doesn't have to be binary. What if you could adopt AI but set certain conditional settings that keep that tool from giving you the same risk of data sharing as it does in its current basic form, say ChatGPT, for example? That's what we're hitting on with data control. In our tool, you can accept that your students are going to use AI, but we allow you to block copy and paste, upload and download, cameras and microphones. So this starts the process of really giving you the ability to say: I want to adopt these tools, but I'm going to make sure that I am setting my district and my students up for success, so that they can't download data out of other tools, student data, reports, different information that has PII in it, and then upload it to a prompt to create, say, a report or a chart, something that is seemingly harmless but is exposing their data and potentially other people's data to risk. So that's really why we are so excited about this tool. It's been yes-and-no for so long in EdTech, but AI needs to be approached in a very different manner. And then, of course, with our overall product, SecureClass, we're preventing cyber threats in the browser as well with our browser security. We're utilizing AI, we're utilizing our own machine learning, and we're looking at threats across the entire world.
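The conditional data-control idea Jena describes, keeping an AI site reachable while disabling specific actions like copy/paste, upload/download, camera, and microphone, could be sketched roughly as follows. This is an illustrative sketch only; the site names, policy structure, and function names are assumptions for the sake of example, not SecureClass's actual configuration or API.

```python
# Illustrative sketch only: per-site data controls instead of a binary
# allow/block decision. Site names and structure are hypothetical.

# District policy: each site maps to the set of actions disabled on it.
# The site itself stays reachable; only the risky actions are blocked.
BLOCKED_ACTIONS = {
    "chat.example-ai.com": {"copy_paste", "upload", "download"},
    "video.example.com": {"camera", "microphone"},
}

def is_action_allowed(site: str, action: str) -> bool:
    """Allow an action unless district policy disables it on this site."""
    return action not in BLOCKED_ACTIONS.get(site, set())

# The AI tool remains usable for ordinary browsing and typed prompts...
print(is_action_allowed("chat.example-ai.com", "view"))        # True
# ...but pasting or uploading (potentially PII-laden) data is blocked.
print(is_action_allowed("chat.example-ai.com", "copy_paste"))  # False
```

In practice a policy like this would also be scoped by user, OU, or organization, as the speakers note elsewhere in the session, so teachers and elementary students could resolve to different action sets.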
We've got a massive database of over 8,000,000,000 endpoints, and we're watching these threats while they are suspicious events so that they don't become problematic for you in the future. Anytime a URL has any kind of suspicious activity, we go ahead and block it with our extension in real time, and we unblock it proactively as we get information that allows us to feel that it is no longer a risk to you. We've blocked over 18,000,000 websites in just the last ninety days for suspicious activity around malware and phishing. I mean, this is no easy task. But, again, securing AI, and then using AI to secure, is all part of SecureClass's underlying mission. Absolutely. And I think there are a lot of great things within that. As we know, the cybersecurity risk has increased tremendously from students or others being able to use AI to generate code to run against you and carry out other malicious tactics, where they can have AI create these things and then cut and paste them into different tools to use against your system. But the other thing Jena is stressing there is that it's browser-based. The majority of our students, and probably students across the country, are using Chromebooks and living in a browser-based type of environment. So having that control, this ability to monitor and manage the risk factor for our kids on the Chrome side, is something that really hasn't been touched on much yet in the industry. And I don't mean to go on a tangent here, Kyle, but we keep hearing about these tariffs and all these rising device costs, and I think Google has done a wonderful job supporting districts in the post-pandemic world of ESSER funding drying up, so that these devices' lifespans are intended to be longer.
I almost wonder and feel like, Kyle, and I'd love your thoughts on this, I've talked to many districts that say they don't secure their Chromebooks, that Chromebooks are inherently secure. Like, do we need to do anything for Chromebooks? And I almost wonder if it was because we adopted them during COVID, at a time of crisis, that we didn't really feel like they were our long-term solution, but now they're becoming our long-term solution. Yeah, I think so. And as well, you know, they were always kind of an additional adoption, a student-based type of device, a low-level-usage device. But now Google and Chrome have really expanded into, you know, pro and just different levels. And, I mean, listen, 99% of what we do is all web-based, so we can use a Chromebook. So it's not just students that are using these devices anymore, and we have to look at a bigger level of control to help with those things. I think that has really added to that process because of the footprint it's had. And it's becoming, like anything: the more usage you see, the more targets there are, and threat actors are gonna start trying to find ways to get to it. Oh, that's a really good point. So you're saying that because Chromebooks are being used more and more, the threat actors are gonna come around and be like, hey, we're not gonna look at those Windows machines anymore, your servers; we're gonna go this way, to the browser. Yeah. Potentially. You know, a lot of districts focus a lot of hard, specific training on our staff. We try to do it with our students, but it's blended into instruction in different ways. So, really, any entry point that a cybercriminal could make into your system is a way in that they're gonna find. And let's be honest: finding that path in through a first grader on a device might be easier than finding it in through the CFO.
So any way we can have some sort of level of control in those areas is critical. That's a really good point. And that's what makes me so happy about this. I'm sharing with you guys firsthand; this is right out of our product. This is what you see when you set these settings. For our data control, we've talked a lot about AI. You can do this at a site-by-site level. You can apply these policies by the user, by the OU, or by the entire organization. Maybe teachers have different settings than students do. We talk about AI, but there are other tools you can apply these conditional settings to, to make sure that the EdTech tools you really want to use don't have data sharing that you don't approve of. And we're only gonna be growing these features in the future. Don't be surprised if you see more data control settings coming up towards ISTE, promoted by our dear friends at Tech and Learning. But I'm really excited about this. I feel like we're in a world now where school districts need more options. Technology is not going away. EdTech is only booming. And giving you more ability to say yes to innovation, without compromising security, is so important to me. And, Kyle, let's talk about this. What options are out there for cybersecurity? Because I feel like for Chromebooks, there are very limited options, and you wouldn't utilize the same tools. You don't use EDR, MDR, or XDR on a Chromebook, or you couldn't afford to, at least. Yeah. Absolutely. There are not that many options out there, or for some of the options that exist, the price factor to apply them to all my student devices is just not something that's attainable nowadays. And, you know, as we're looking at, like you see on the screen, the different layers, what we can do is phase in that approach and adoption usage.
So, you know, the younger kids get more restrictions, not blocks, and then we ease that up. We take the training wheels off, so to say, as they get more immersed in the AI world and have more understanding of the information they're getting in and out of it. But there really isn't a great, cost-effective tool at this point that is addressing this AI aspect in the marketplace. That makes me so happy. So we've talked a lot about the product itself. Just, you know, the overall four things that make us really unique. We've got advanced threat prevention at the browser level; this is highly unique in the space. We're doing it at an affordable price. So we feel like it's affordable. We're new; I can't wait to learn more and really test the market. But, just as we did with CatchOn, I feel like we're at the right place at the right time. I feel like we've got a really timely solution for you with customized data controls. You know, we have the automatic data sharing protection, so you can really control how you want those tools to operate. And you can also do it at a category level; that's coming here in the next two weeks. We're excited about that. And what that means is, for a GenAI category, instead of having to specify which tools, we will be utilizing our database and our AI and ML to identify all the new tools coming out in the space that you might not even know about. And those conditional settings that you apply to GenAI will automatically be updated to encompass all of the sites added to that category. And then, of course, the data sharing awareness alert. So what's cool about that is I feel like this could grow.
I'd love some feedback on this, but there's kind of a training aspect, too. When you set these data control policies, if someone, so, for example, me: when I was beta testing this on ChatGPT, I set a data control policy for myself and tried it, and it popped up and said, you can't do this; this has been set by your organization. I feel like we could even grow that into having more targeted messaging for students that kind of tells them why that's not good. Absolutely. I think that's the key: the awareness and the alert notification. Because, like with any student, I mean, my own kids at home, you tell a child no, or even an adult no, and the first thing in their mind typically is why. You know? Or, I'm gonna find a way around that. And so our ability to not just all-out block, but to send something there that says, here's a little bit about why. We're not being this gatekeeper. We just want you to be aware of the information that you're trying to share and how that could be dangerous, and so forth. So going beyond just straight blocking or preventing access to educational tools is what's so important for us, because, again, this is a tool that we're gonna be living with the rest of our lives. Everybody's gonna be engaged in some form or fashion with some sort of AI. And so the way to properly use it is what's key in this process. I love that. We've already talked about fighting AI with AI. I mean, it's important. I'll leave this here. I know you guys will have the slides, so I'll just keep progressing through. We're delighted to share this product with you. I'm so excited. You know, we launched the public beta of it at CoSN. Had a nice little launch party. It was a lot of fun for those who were at CoSN. Really enjoyed meeting all of you, and we look forward to sharing this tool with you.
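The category-level controls Jena mentions a moment earlier, where conditional settings are applied once to a whole category like GenAI and automatically cover newly discovered tools, could work roughly like the sketch below. The classifier stand-in, names, and data structures are assumptions for illustration only, not the product's real implementation.

```python
# Illustrative sketch: policies attach to a category, not to individual
# sites, so newly classified sites inherit the rules automatically.
# All names here are hypothetical.

# The district sets the conditional settings once, per category.
CATEGORY_POLICIES = {
    "gen_ai": {"copy_paste", "upload", "download"},
}

# Stand-in for the vendor-side ML classification that assigns each
# site to a category as it is discovered.
SITE_CATEGORIES = {
    "chat.example-ai.com": "gen_ai",
    "news.example.org": "news",
}

def blocked_actions(site: str) -> set:
    """A site's blocked actions come from its category's policy."""
    category = SITE_CATEGORIES.get(site)
    return set(CATEGORY_POLICIES.get(category, set()))

# When a brand-new AI tool is classified into "gen_ai", it picks up the
# GenAI restrictions with no per-site work by the district.
SITE_CATEGORIES["brand-new-tool.example"] = "gen_ai"
print(blocked_actions("brand-new-tool.example"))
```

The design choice being sketched is the one Kyle and Jena describe: the district expresses intent once ("restrict data movement for GenAI"), and classification, rather than manual listing, decides which sites the intent applies to.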
So, my team, will you please put in the docs what the next steps are here? I am more than happy to have a demo with anyone who's interested, to set some time aside, show you what the product does, understand how it might fit into your environment, and learn more about your school district. And by joining this webinar, you also have first access to this tool. So our early defenders program is open, and that means that anyone can sign up. We are being a little limited with it because we don't want to overextend ourselves, but we're really excited to show you what this awesome product is doing. We've been running it in our organization for the last three months, learned a lot, been growing, been testing with people like Kyle and our other advisors who've helped us really shape and improve our implementation process. And, yeah, like I mentioned, we're seeing so many threats out there, and we're blocking them in real time. So we're not providing you more alerts; we're taking on that burden for you, automatically blocking those URLs that are suspicious before they can become problems, and then giving you the tools to customize AI settings to meet your needs. Yeah, Jena. I saw a question come in. We talked a lot about how this can work on Chromebooks or in Chrome in general. What about SecureClass's ability to work on non-Chromebooks or in other browsers right now? Sure. So we're device-agnostic, and we are growing there. We started with Chrome. We do have Edge and Firefox, but those are still being tested right now. I would say those would probably be ready for the mainstream by ISTE. So one thing, when we're talking about adoption: in one of the other slides, we're showing the ability to lessen that burden on IT staff.
So, again, as we're piloting and working through this product test, a big thing, as we all know, is that we don't have time for staff to babysit another product or to touch thousands upon thousands of devices to get something to work. So the way SecureClass works, being able to set it up and roll it out with zero touch for our users, without needing some sort of intervention to activate it on all the devices, is of course huge for any district of any size out there. That's one thing that is always on my mind when we're working on new products: how am I even gonna manage getting this out there? And then how am I gonna manage updating it and all those kinds of things? Another question that came through was whether the presentation will be shared out to the group. And I believe so. I think we're gonna be sharing out the slideshow with everybody that signed up, and there will also be a recording of this that I believe you'll get in a follow-up email and can reference back to. Alright. Has my team shared the link where you can sign up for the early defenders program? Signing up won't necessarily mean that you're saying you're ready, but it will at least trigger us to set up a meeting with you and learn more about your district. So another great question just popped in that I think is very relevant. The question is, does this product work with other business models, say health care? And I'll say from my perspective, well, absolutely. The importance of that in the health care model: this isn't just a K-12 problem. Right? This is an industry problem.
So having that ability, especially as I'm sitting here thinking about health care, PII, and other types of information being shared in and out, where you can set different restriction levels for different types of users and roles within your organization. That was just me getting excited about it. I'll let Jena share; hopefully I'm spot on with you there, Jena. Yeah, no, you're totally spot on. You're seeing enterprise browser security becoming a really big trend in the enterprise market, but those products are very expensive, and they require you to use their browser. And what we're doing is saying, we know you don't want to use another browser, so we're going to secure the browser that you use most. Of course we started with Chrome, but we are growing into other browsers. Chrome, Edge, Firefox, and Safari will all be part of our suite. But you have to start somewhere, and we want to do everything with fidelity. Right? So Chrome is working extremely well, and we'll be growing into the other browsers very soon. I think that's a key point, because a lot of products require you to use their specific browser to secure you. What's important with SecureClass is that we're meeting the students where they're at; we're meeting our users where they are, in the environment they live in, to secure them within that environment, not making them adopt something separate. As we know in education, adopting a different browser brings all sorts of other ramifications with it. So being able to seamlessly incorporate into something we're already used to working with and managing is very important. Absolutely. Anyone have any other questions? We'll answer anything.
They don't have to be on topic if you want to have a discussion around cybersecurity or AI, and they don't have to be about SecureClass specifically. You've got a captive audience here, so just leave that out there. Well, Kyle, I'll end with this. I've been so blessed to work with you on CatchOn and now SecureClass. I'm very curious: will you share with everyone what drew you into SecureClass? We didn't get to work as closely on CatchOn; you came on, I guess, a year in. You're still a CatchOn champion and helped really shape that product, but now you're in on the ground floor. You've seen all the good and the bad. Yeah, that's what's exciting about working with Jena and companies like this: being able to get in and make sure their products work for us in education, so that there's a voice for how we use and manage products, instead of solutions built from what others think and then forced down on education to figure out how to make work. So we have those discussions back and forth: what are the true pain points you're feeling? What are your end users' pain points? I have lots of advisory groups within my district here, down to students at the elementary level, and I talk with them about the obstacles they hit. Some of it is around AI; we can't open up everything yet, but we have some things open. How can we find that happy medium? So what really drew us in was this: we've got to be able to more broadly adopt AI usage, but in a controlled fashion, and then have these other controls that we haven't really thought about. I'll be honest.
We didn't think about, for example, making certain sites unable to use the web camera or access the microphone. Yes, those things can happen, but putting a data control set in place to manage access at that level is not something we'd really looked at before. And nowadays, as we know, new sites are popping up as we speak; new websites are hitting the market that want all of these types of access. So having an overlay that protects us first is critical for us overall. So one question: can we talk about what the implementation support model looks like, Jena? What does it look like when I sign up? I can speak to it from our perspective: we've been working very closely on implementation as we're out testing and really getting this going, and it's been a pretty seamless process, to be honest with you. From the initial setup through deploying out to our browsers, it has been an easy process working back and forth with the team at Itopia. But, Jena, can you talk about what the implementation would look like? Sure. So we'll set up a meeting with you. If you want to do early defenders, well, let's start with data privacy in general. Data privacy wise, we have many different agreements through SDPC; we're a proud member, and we probably have one in your state. If we don't, bring it to my attention and we'll get that squared away first. But then the implementation is pretty seamless. We jump on a call and set up your portal access; that's where you'll see the data, the dashboard you saw a couple of slides ago, and all of your threat data. And then we give you an install guide so you can push out the extension. It's a very simple, seamless extension.
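The per-site data controls Kyle mentioned earlier in this exchange, keeping specific sites away from the webcam or microphone, can be pictured as a per-origin policy lookup with a deny-by-default fallback. This is a hypothetical sketch; the site names, capability keys, and function are illustrative and not SecureClass's actual API.

```python
from urllib.parse import urlparse

# Hypothetical per-origin data-control policy: which device capabilities
# each site may use. Sites not listed fall back to a deny-all default.
SITE_POLICY = {
    "meet.example.edu": {"camera": True, "microphone": True},
    "quiz-site.example": {"camera": False, "microphone": False},
}
DEFAULT_POLICY = {"camera": False, "microphone": False}

def capability_allowed(url: str, capability: str) -> bool:
    """Check whether a site may access a capability such as 'camera'."""
    host = urlparse(url).hostname
    policy = SITE_POLICY.get(host, DEFAULT_POLICY)
    return policy.get(capability, False)

print(capability_allowed("https://meet.example.edu/room/1", "camera"))    # True
print(capability_allowed("https://quiz-site.example/test", "microphone")) # False
print(capability_allowed("https://brand-new-site.example/", "camera"))    # False
```

The deny-by-default fallback is what addresses the "new sites are popping up as we speak" concern: a site the district has never heard of gets no camera or microphone access until someone explicitly grants it.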
You push it out. We typically like to start with a smaller group, maybe your admins or a small OU, to show you that it works well and that it's zero-touch. Then, once you see that it's good to go, you can push it out to your whole organization. After that we'll set up some data reviews with you. This is something we used to do with CatchOn, and it worked very well. I like to walk you through, after the first week, what the data is, what you're seeing, how to navigate, and how to set some of the policies. But it's very simple. The UI is extremely easy to use; it's intended to be, and I hope it's really insightful. It's going to show you all sorts of interesting global data across those 8,000,000,000 endpoints we talked about, and then we'll also show you the threat prevention data for your own organization: what did we block specifically for you that could have become an event had we not been there? That first data review is always really eye-opening. Absolutely. And it's about a week of time at the most, as long as you want to work with us and you've got the time to set aside. It takes no longer than a week to get you from point A to point Z. Yeah. One thing that's important, and a question came in about this: SecureClass itself is new, as we're just bringing it out, but Itopia itself is not a new company. So, Jena, can you talk a little bit about that? How long has the company been around, and what in general is Itopia's client base? I really appreciate that question, especially when we're talking about AI, because it seems like every day there's some new company that's 'AI something,' and like you're asking, well, where did you come from? So let me tell you a little about Itopia and how it got to this point.
Yeah, absolutely. So Itopia has been around for over ten years, but in the education space for just under four years. The company started off by providing an enterprise-level VDI product, virtual app streaming to Chromebooks. The main thing we started off doing, before I was even here, when our cofounders built it during COVID, was helping school districts run really heavy CTE applications straight from a Chrome browser. So again, leveraging the Chromebooks, helping maximize those Chromebooks to run those heavy CTE applications that typically run on a Mac or a PC. That's our main solution, cloud apps. It's been adopted by over 100 school districts across the country. We have some large ones like Houston ISD, and we've got lots of case studies with districts ranging from five students to over 50,000 students. We've got some large customers and some small customers, but we've been working with about a hundred school districts over roughly the last three years. And then SecureClass came into the mix because, as I mentioned, enterprise is really moving into this enterprise browser mode and trying to do away with VDI. So we now have a solution that is a complement to it: whether you want to stream applications from your Chromebook or you want to secure your web and SaaS apps, that's where SecureClass comes into play. Absolutely. So definitely not something that just spun up in the last month or two as some kind of AI wrapper tool pushed into the market segment. Another question came in asking about SecureClass, I mean, Itopia in general: are you compliant with the various student data privacy requirements? This one in particular asked about Connecticut.
Do y'all have any kind of seals for that, or what are your qualifications, I guess you could say? Yes. So we have the mecca of all seals: we have SOC 2 compliance. When I joined Itopia, having worked with SDPC for a long time, we recently signed the agreement for, I believe, about 11 states, like New York, New Jersey, Connecticut, the different New England area states. We have it in California and Texas. But like I mentioned before, if you go into the SDPC dashboard, you'll probably see an agreement that you can write into Exhibit E. If not, we've signed the NDPA, and if you need a state-specific exhibit, I'm happy to review that; more than likely we meet it, because we are SOC 2 compliant. We're also NIST 1.1 compliant as well. We do not have the official 1EdTech approval, but I worked very closely with Kevin over there for another one of our school districts, and I had to meet all of their standards. So we do meet all of the standards of 1EdTech as well; we just don't have the official seal because we're not an official member yet. Absolutely. I see Michael had a question: is it Mac compatible? Of course, Michael, this is browser based, so it's not dependent on the operating system; it's about browser compatibility inside SecureClass. You know, as we look at and continue to adopt AI, something I think we all need to reflect on is how we're ensuring that we're not unknowingly losing control as our students use these third-party tools, and what information they're putting out there. We can do all the education; we can do our digital citizenship. But what other guardrails can we put in place to protect our students and ourselves?
So that's why looking into and understanding products like this is so important: it gives us that lens and the ability to add an extra layer of security. Because I'm assuming I'm not alone in the fear that we train, train, train, but it might still happen, and probably does, only without our knowing. This makes it a known factor so that we can get in front of it. Let's see what else has come through here. Yeah, so one of the questions was about your SOC 2 compliance: if somebody wanted to adopt your product, would they be able to see your SOC 2 report and that kind of information? Absolutely. And that's an important thing to always ask, because sometimes people say they have these things, and it's always good to ask a little more; as we know, people might do something just to check a box, and we need so much more than that. So I really appreciate that question, asking how you can actually see the compliance level we claim. Absolutely, we'd be happy to share that. And I'll tell you, when I first joined Itopia, I was thrown into the gauntlet. They said, by the way, we just started SOC 2 compliance, welcome. I was like, oh gosh. So I had a firsthand role in going through that process, and I'd be more than happy to share it with you. Thank you for that question. Yeah. So, again, everybody who's on, you have the ability to jump in and be part of the early defenders program. Take advantage of the QR codes or the link on the screen, and see what that looks like for your district and how it can play a part in helping secure the AI environment for your staff and students overall. Even if you're not in education, we've had that question: does it apply elsewhere?
Let's see what we can do and be a part of this evolution of protecting our students in the AI environment we're living in. Well, I think we have some extra time, so I will give everyone their time back today; I was just looking at the clock. Christine, do you want to wrap up for us? Sure. First of all, thank you so much; that was so helpful. And this presentation will be made available, especially these QR codes, so if you didn't get a chance to scan them or you want to share all this great information with your colleagues, we will send the recording out after the event. So thank you both; that was a terrific conversation. Thank you all for your questions, and thank you to Itopia for sponsoring this conversation. And now it's my pleasure, drum roll please, to announce that the winner of our $250 Amazon gift card is Jenny Sassemin from New Jersey. So congratulations to Jenny, and thank you again to all of you for spending your afternoon with us. And thank you to Itopia and to our speakers, and have a wonderful rest of your week. Take care.