Category Archives: Education

8 Tried and True Edtech Tools to Try in 2018

Steve Dembo on episode 222 of the 10-Minute Teacher Podcast

From the Cool Cat Teacher Blog by Vicki Davis

Follow @coolcatteacher on Twitter

Sometimes the best tools have been around awhile. Steve Dembo @teach42 talks about the tried and true tools that teachers should still use.

8 edtech tools to try in 2018

Richard Byrne, author of Free Technology for Teachers, has several online professional development options to check out: GSuite for Teachers, Teaching History with Technology, and Practical Edtech Coaching.

See all of Richard’s Courses at http://ift.tt/2lomeMO. Richard is not a sponsor of the show; however, I am an affiliate.

Listen Now


Enhanced Transcript

Tried and True Edtech Tools to Try in 2018

Link to show: http://ift.tt/2CcldyH
Date: January 2, 2018

Vicki: Today we’re talking with my friend, Steve Dembo @teach42, coauthor of Untangling the Web. His was one of the first blogs I read, and one of the first podcasts I listened to.

Steve, today for Ed Tech Tool Tuesday, what are some things that people need to try in 2018? Do we need to always be doing the new latest and greatest, or are there some things that maybe we might need to dust off?

Why Time-Tested Tried and True Tools Are So Useful

Steve: Well, I think it’s interesting, because a lot of times when people go to conferences, they’re always seeking out, “What are the ones I haven’t heard of before?” They’re looking for something new and shiny and sexy and so on.

But the reality is, the new ones are sometimes the ones that aren’t necessarily well established, that don’t necessarily have a good financial plan in place. They’re the ones that you can’t necessarily depend on still being there Monday when you want to start using them with students.

And yet, there are all of these great tried and true Web 2.0 tools, or online technologies, that not only have a firm financial plan in place but have also withstood the test of time, and they’ve actually been well developed over the years, with new features and so on.

I think sometimes people — instead of focusing on what’s new and what they haven’t seen before — they need to be focusing more on making better and more effective use of the ones that are well established.

Vicki: OK, give us some of those well-established ones.

Tool #1 Padlet

Steve: Well, I’ll take Padlet.

I think Padlet is a perfect first example because everybody kind of knows what it does. For a little while, everybody was talking about it because it was the greatest, newest, shiniest thing.

And yet nobody really talks about it much anymore. There’s an entire generation of teachers that aren’t familiar with Padlet because nobody’s evangelizing it anymore.

And, they have done a phenomenal job of upgrading it over time, of adding more educational-friendly features, of adding things like commenting, adding new layouts and columns and so on. So it can function sort of like a Trello, where you can have upvoting à la Reddit, making it a lot more interactive and kind of changing the nature of the way these Padlets can function so that it can fit a lot more needs.

And yet, a lot of people think, “Oh, I’m familiar with Padlet, or I’m familiar with Wallwisher,” (note: Wallwisher is now Padlet) and they don’t take the time to go explore, “What can it do for me NOW?”

Vicki: OH, and it’s such a fantastic tool to use. I don’t know why we keep thinking we have to use what’s new instead of using what absolutely just works, rock solid.

Are there any other rock-solid examples besides Padlet?

Tool #2: VoiceThread

Steve: You know, it’s funny because there are some that are very, very solid and dependable, like VoiceThread that haven’t necessarily evolved all that much,

Tool #3: WeVideo

and then you take others like WeVideo that have just done an even better job of establishing really great business plans.

You know, they’re making most of their money on the personal accounts, on the business accounts, on the enterprise accounts and so on, which means that they can offer educators even more features for free and they keep on adding things in there, too.

One of the things that they added recently that I love is this “motion graphics” element. It’s basically like After Effects, in a sense. And you can do some really incredibly brilliant and subtle things in it. If you really want to get creative and push the envelope, you can do some really mind-blowing green screen type things with the motion graphics. It’s one of the most full-featured video editing products out there, and the fact that it will work on a Chromebook is just amazing.

Vicki: Yeah. It brings video in the reach of everybody, doesn’t it?

What else do you have?

Tool #4 Kahoot

Steve: Well, let’s see. A lot of times what I like is these ones that are doing consistent development. They’re listening to users and really putting in the features that the users are requesting and wanting to see. Kahoot has done a very nice job of that.

Tool #5: Sutori

One of the other ones that has kind of flown under the radar is a site called Sutori. Sutori has now been around for about I think a year and a half, maybe even almost two years. It kind of defies definition. It’s sort of created its own genre.

But what I really love about it is that they’ve got new features that are coming out every two or three months, and they’re all in direct response to the things that educators have been asking. That’s one of the things I demonstrate when I show this in presentations.

A lot of times people don’t really think the developers want to hear from educators, or that it’s going to have much of an impact. What they don’t realize is that a lot of these online ed tech tools — they’re teams of three or four people. The people who are answering the support questions are the same people who are doing the primary development on them.

So when you say to the support person in the chat room, “I’d like to see this feature,” or “If you did this, then I could use it with my students,” you’re talking to the people who can actually make that happen! So that’s another one that I’ve become a huge fan of.

Vicki: So Kahoot obviously helps us do quizzes, and our students can make them, and that’s awesome.

So Sutori… Is that really more for vocabulary? I haven’t used it.

Steve: No, it’s sort of… a way to sort of publish stories, but in a sort of linear fashion. It’s sort of like a timeline, but it’s not a timeline because there aren’t necessarily any numbers. It almost defies definition, but it’s a way to publish something almost like a blog, except that it is actually interactive. It can be collaborative, à la Google Docs.

If you’re not familiar with it yet, you should definitely — if nothing else — go to the website and look at their gallery. Their gallery has an excellent selection of great examples that would appeal to educators. One of the other nice things about it is that you can take any one of those, copy it to your own account, and use them as templates and just modify them to your heart’s content.

Tool #6: Wordle

Vicki: Now, before the show, you were even talking about Wordle. I mean, how can you explain that? That’s such a powerful tool, and I use it all the time with my students.

Steve: (laughs)

Wordle is sort of my litmus test. Now Wordle hasn’t changed one iota from the very beginning, which a lot of people can appreciate because we all know what it’s like when you pull it up on Monday with the students and all of a sudden it looks completely different. Wordle’s not going to.

But what I find ironic — that sort of encapsulates this whole problem of people only evangelizing the newest items in the tech scene — is that as soon as everybody’s familiar with it (and when I say everybody, I mean the people that are hanging out in Twitter chats, the people that go to ISTE, the people that go to the affiliate conferences) as soon as everybody knows about a web tool, most of those people stop talking about it, they stop blogging about it, they stop sharing it in presentations.

The net result is that when I go into schools and I talk to teachers and I talk to educators in general, I would estimate that more than half of them haven’t heard of Wordle. Most of them just have never even seen it, because no one’s taking the time to share it anymore because it’s not new to them.

Tool #7 & 8 WordPress and Edublogs

It’s sort of the reason why it doesn’t seem new and sexy to talk about blogging or to evangelize blogging anymore, or to show people how to use Edublogs, or how to use WordPress. And yet, you know what? There’s still a need for it.

Vicki: (agrees)

Steve: It may not be the newest and freshest thing in the world, but there’s still this whole generation of teachers that didn’t get the same exposure to it and haven’t had the same journey that we have.

Vicki: Well, when I do my “Fifty-Plus Tools” presentation, I always show how you can go on Wikitext and you can pull out, say, the Emancipation Proclamation, and you can put it into Wordle, and you really frontload that vocabulary. It’s such an important teaching technique, whatever you’re teaching, particularly if the subject you’re teaching is on public domain, and you can pull the text out and put it in there. It’s just a fantastic method.

So, Steve, as we finish up, what kind of inspiration do you have for educators who feel overwhelmed by all of this ed tech, to get started and try something new?

Inspiration for Overwhelmed Teachers

Steve: (laughs)

Well, the first thing to keep in mind is… I love doing this exercise during a presentation… I ask people to just raise their hands if they feel like they’re behind the technology curve. And nearly two-thirds to three-quarters of the audience will raise their hand.

The reality is that every single one of those people — just by being at a tech conference, by listening to podcasts like yours — you’re ahead of the technology curve. You’re far more tech-savvy than most other people, most other educators that are just… I don’t want to say just punching the card and going through the routine… but who aren’t necessarily seeking out new sources of professional development.

So first of all, I strongly urge people not to be so critical of themselves. But then it’s the traditional, “You have to make the time to do it.” There will never be a time when you say, “Boy! What am I going to do with all this extra free time that I have?”

Vicki: (laughs)

Steve: It just doesn’t happen!

Vicki: No, it doesn’t.

Steve: So you have to schedule yourself that time. You have to build it in and say, “For this hour, I’m going to play. Because play is going to make me a better educator.” And not force yourself to feel guilty for not taking the time to play with a new technology.

Vicki: Yes, and as I always say, innovate like a turtle. Take tiny little steps forward every day, because it’s about forward progress. We can all learn something new. Now I’m going to be playing with Sutori, so I’ve learned something new today.

Thank you so much, Steve. We will put all of your information in the Shownotes so folks can follow you.

Steve: Thank you so much. It’s been a pleasure talking with you.

Transcribed by Kymberli Mulford

kymberlimulford@gmail.com

Steve Dembo Bio as submitted


A pioneer in the field of educational social networking, Dembo was among the first to realize the power of blogging, podcasting, Twitter, and other Web 2.0 technologies in connecting educators and creating professional learning communities.

Steve Dembo served for ten years as Discovery Education’s Director of Learning Communities and led their Innovation and Strategy team. He is the co-author of the book Untangling the Web: 20 Tools to Power Up Your Teaching. The National School Board Association named him one of 2010’s “Twenty to Watch,” a list honoring individuals finding innovative ways to use technology to increase classroom learning. In 2013 he began serving the Skokie/Morton Grove District 69 as a member of the School Board. Dembo is a course designer and adjunct professor for Wilkes University where he serves as class instructor for the Internet Tools for Teaching course within the Instructional Media degree program.

Steve Dembo is also a dynamic speaker on the capabilities of social networking, the power of educational technologies and Web 2.0 tools, and the ability of digital content to empower teachers to improve student achievement. He has delivered keynotes and featured presentations at dozens of conferences globally including ISTE, TCEA, FETC, MACUL, GaETC, METC, CUE, ICE, TEDxCorpus Christi, #140Edu, EduWeb, .EDU and the Social Media Masters Summit. Dembo was also a featured panelist at Nokia Open Labs as an expert on mobile device integration in education.

Blog: http://teach42.com

Twitter: @teach42

Disclosure of Material Connection: This episode mentions an affiliate program. This means that if you choose to buy, I will be paid a commission through the affiliate program. However, this is at no additional cost to you. Regardless, I only recommend products or services I believe will be good for my readers and are from companies I can recommend. I am disclosing this in accordance with the Federal Trade Commission’s 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising.” This company has no impact on the editorial content of the show.

The post 8 Tried and True Edtech Tools to Try in 2018 appeared first on Cool Cat Teacher Blog by Vicki Davis @coolcatteacher helping educators be excellent every day. Meow!

via Cool Cat Teacher Blog http://ift.tt/2DOEYNl

Today’s news: Real or fake? [Infographic]

Today students have a blizzard of information at the ready: on devices in their pockets, at school, in their homes, by their bedsides, on their wrists… It’s almost a constantly "on" information world.

Information and content floods to their eyes and ears in never-ending streams, torrents, downloads, feeds, and casts. How do they determine what is real and what is not? What matters and what doesn’t? Here’s a cheat sheet to help out.


At a time when misinformation and fake news spread like wildfire online, the critical need for media literacy education has never been more pronounced. The evidence is in the data:

  • 80% of middle schoolers mistake sponsored content for real news.
  • 3 in 4 students can’t distinguish between real and fake news on Facebook.
  • Fewer than 1 in 3 students are skeptical of biased news sources.

Students who meet the ISTE Standards for Students are able to critically select, evaluate and synthesize digital resources. That means understanding the difference between real and fake news.

There are several factors students should consider when evaluating the validity of news and resources online. Use the infographic below to help your students understand how to tell them apart.

Click on the infographic to open a printable PDF.

Media Literacy: Real News Infographic (November 2017)

Learn more about teaching K-12 students how to evaluate and interpret media messages in the book Media Literacy in the K-12 Classroom by Frank Baker.

via www.iste.org http://ift.tt/2yq5zBQ

The end of the cloud is coming

Viktor Charypar is a Tech Lead at UK-based digital consultancy Red Badger.

We’re facing the end of the cloud. It’s a bold statement, I know, and maybe it even sounds a little mad. But bear with me.

The conventional wisdom about running server applications, be it web apps or mobile app backends, is that the future is in the cloud. Amazon, Google, and Microsoft are adding layers of tools to their cloud offerings to make running server software easier and more convenient, so it would seem that hosting your code in AWS, GCP, or Azure is the best you can do — it’s convenient, cheap, easy to fully automate, you can scale elastically … I could keep going. So why am I predicting the end of it all?

A few reasons:

It can’t meet long-term scaling requirements. Building a scalable, reliable, highly available web application, even in the cloud, is pretty difficult. And if you do it right and make your app a huge success, the scale will cost you both money and effort. Even if your business is really successful, you eventually hit the limits of what the cloud, and the web itself, can do: The compute speed and storage capacity of computers are growing faster than the bandwidth of the networks. Ignoring the net neutrality debate, this may not be a problem for most (apart from Netflix and Amazon) at the moment, but it will be soon. The volumes of data we’re pushing through the network are growing massively as we move from HD to 4K to 8K, and soon there will be VR datasets to move around.

This is a problem mostly because of the way we’ve organized the web. There are many clients that want to get content and use programs and only a relatively few servers that have those programs and content. When someone posts a funny picture of a cat on Slack, even though I’m sitting next to 20 other people who want to look at that same picture, we all have to download it from the server where it’s hosted, and the server needs to send it 20 times.

As servers move to the cloud, i.e. onto Amazon’s or Google’s computers in Amazon’s or Google’s data centers, the networks close to these places need to have incredible throughput to handle all of this data. There also have to be huge numbers of hard drives that store the data for everyone and CPUs that push it through the network to every single person that wants it. This gets worse with the rise of streaming services.

All of that activity requires a lot of energy and cooling and makes the whole system fairly inefficient, expensive, and bad for the environment.

It’s centralized and vulnerable. The other issue with centrally storing our data and programs is availability and permanence. What if Amazon’s data center gets flooded, hit by an asteroid, or destroyed by a tornado? Or, less drastically, what if it loses power for a while? The data stored on its machines now can’t be accessed temporarily or even gets lost permanently.

We’re generally mitigating this problem by storing data in multiple locations, but that only means more data centers. That may greatly reduce the risk of accidental loss, but how about the data that you really, really care about? Your wedding videos, pictures of your kids growing up, or the important public information sources, like Wikipedia. All of that is now stored in the cloud — on Facebook, in Google Drive, iCloud, or Dropbox and others. What happens to the data when any of these services go out of business or lose funding? And even if they don’t, it is pretty restricting that to access your data, you have to go to their service, and to share it with friends, they have to go through that service too.

It demands trust but offers no guarantees. The only way for your friends to trust that the data they get is the data you sent is by trusting the middleman and their honesty. This is okay in most cases, but websites and networks we use are operated by legal entities registered in nation states, and the governments of these nations have the power to force them to do a lot of things. While most of the time, this is a good thing and is used to help solve crime or remove illegal content from the web, there are also many cases where this power has been abused.

Just a few weeks ago, the Spanish government did everything in its power to stop an independence referendum in the Catalonia region, including blocking information websites telling people where to vote. Blocking inconvenient websites or secretly modifying content on its way to users has long been a standard practice in places like China. While free speech is probably not a high-priority issue for most Westerners, it would be nice to keep the internet as free and open as it was intended to be and have a built-in way of verifying that content you are reading is the content the authors published.

It makes us — and our data — sitting ducks. The really scary side of the highly centralized internet is the accumulation of personal data. Large companies that provide services we all need to use in one way or another are sitting on monumental caches of people’s data — data that gives them enough information about you to predict what you’re going to buy, who you’re going to vote for, when you are likely to buy a house, even how many children you’re likely to have. Information that is more than enough to get a credit card, a loan, or even buy a house in your name.

You may be ok with that. After all, they were trustworthy enough for you to give them your information in the first place, but it’s not them you need to worry about. It’s everyone else. Earlier this year, credit reporting agency Equifax lost data on 140 million of its customers in one of the biggest data breaches in history. That data is now public. We can dismiss this as a once in a decade event that could have been prevented if we’d been more careful, but it is becoming increasingly clear that data breaches like this are very hard to prevent entirely and too dangerous to tolerate. The only way to really prevent them is to not gather the data on that scale in the first place.

So, what will replace the cloud?

An internet powered largely by client-server protocols (like HTTP), with security based on trust in a central authority (like TLS), is flawed and causes problems that are fundamentally either really hard or impossible to solve. It’s time to look for something better — a model where nobody else is storing your personal data, large media files are spread across the entire network, and the whole system is entirely peer-to-peer and serverless (and I don’t mean “serverless” in the cloud-hosted sense here, I mean literally no servers).

I’ve been reading extensively about emerging technologies in this space and have become pretty convinced that peer-to-peer is where we’re inevitably going. Peer-to-peer web technologies are aiming to replace the building blocks of the web we know with protocols and strategies that solve most of the problems I’ve outlined above. Their goal is a completely distributed, permanent, redundant data storage, where each participating client in the network is storing copies of some of the data available in it.

Image source: Wikimedia Commons (http://ift.tt/2xzBAaf)

If you’ve heard about BitTorrent, the following should all sound familiar. In BitTorrent, users of the network share large data files split into smaller blocks (each with a unique ID) without the need for any central authority. In order to download a file, all you need is a “magic” number — a hash — a fingerprint of the content. The BitTorrent client will then find peers that have pieces of the file and download them, until you have all the pieces.

The interesting part is how the peers are found. BitTorrent uses a protocol called Kademlia for this. In Kademlia, each peer on the network has a unique ID number, which is of the same length as the unique block IDs. It stores a block with a particular ID on a node whose ID is “closest” to the ID of the block. For random IDs of both blocks and network peers, the distribution of storage should be pretty uniform across the network. There is a benefit, however, to not choosing the block ID randomly and instead using a cryptographic hash — a unique fingerprint of the content of the block itself. The blocks are content-addressable. This also makes it easy to verify the content of the block (by re-calculating and comparing the fingerprint) and provides the guarantee that given a block ID, it is impossible to download any other data than the original.
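To make that concrete, here is a minimal Python sketch of the two ideas (my own illustration, not code from BitTorrent or any Kademlia library; the peer names are made up): a block’s ID is the SHA-256 fingerprint of its content, and Kademlia-style "closeness" between two IDs is simply their bitwise XOR.

```python
import hashlib

def block_id(content: bytes) -> int:
    """Content-addressed ID: the SHA-256 fingerprint of the block itself."""
    return int.from_bytes(hashlib.sha256(content).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance between two IDs: bitwise XOR."""
    return a ^ b

def closest_peer(bid: int, peer_ids: list) -> int:
    """The peer whose ID is closest to the block ID is asked to store the block."""
    return min(peer_ids, key=lambda pid: xor_distance(pid, bid))

def verify(content: bytes, bid: int) -> bool:
    """Anyone can re-hash downloaded bytes and compare them with the requested ID."""
    return block_id(content) == bid

cat_picture = b"pretend these bytes are a funny cat picture"
bid = block_id(cat_picture)
peers = [block_id(name.encode()) for name in ("alice", "bob", "carol")]
print("stored on peer", hex(closest_peer(bid, peers))[:18])
print("content verifies:", verify(cat_picture, bid))
```

Nothing in the sketch is specific to files: any blob of bytes gets an ID this way, which is what makes the verification guarantee above possible.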

The other interesting property of using a content hash for addressing is that by embedding the ID of one block in the content of another, you link the two together in a way that can’t be tampered with. If the content of the linked block is changed, its ID would change and the link would be broken. If the embedded link is changed, the ID of the containing block would change as well.

This mechanism of embedding the ID of one block in the content of another makes it possible to create chains of such blocks (like the blockchain powering Bitcoin and other cryptocurrencies) or even more complicated structures, generally known as Directed Acyclic Graphs, or DAGs for short. (This kind of link is called a Merkle link after its inventor, Ralph Merkle. So if you hear someone talking about Merkle DAGs, you know roughly what they are.) One common example of a Merkle DAG is a git repository. Git stores the commit history and all directories and files as blocks in a giant Merkle DAG.
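A tiny sketch of a Merkle link makes the tamper-evidence visible. This is only an illustration in Python, not git’s actual object format, and the block contents are invented for the example.

```python
import hashlib, json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A "Merkle link" is just the hash of one block embedded in the content of another.
leaf = b"first revision of a file"
leaf_id = h(leaf)

# The parent block (think: a directory entry or a commit) embeds the leaf's ID.
parent = json.dumps({"name": "notes.txt", "link": leaf_id}).encode()
parent_id = h(parent)

# Tampering with the leaf changes its hash, so the stored link no longer matches...
tampered = b"first revision of a file (secretly edited)"
print(h(tampered) == leaf_id)        # False: the link is broken

# ...and repairing the link changes the parent's content, hence the parent's own ID.
fixed_parent = json.dumps({"name": "notes.txt", "link": h(tampered)}).encode()
print(h(fixed_parent) == parent_id)  # False: the change propagates up the DAG
```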

And that leads us to another interesting property of distributed storage based on content-addressing: It’s immutable. The content cannot change in place. Instead, new revisions are stored next to existing ones. Blocks that have not changed between revisions get reused, because they have, by definition, the same ID. This also means identical files cannot be duplicated in such a storage system, translating into efficient storage. So on this new web, every unique cat picture will only exist once (although in multiple redundant copies across the swarm).
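The deduplication property falls out almost for free. A toy content-addressed store (just the idea, not how IPFS or BitTorrent actually persist blocks) keys every block by its own hash, so putting the same bytes twice stores them only once:

```python
import hashlib

class ContentStore:
    """A toy content-addressed store: blocks are keyed by their own hash."""
    def __init__(self):
        self.blocks = {}

    def put(self, content: bytes) -> str:
        bid = hashlib.sha256(content).hexdigest()
        self.blocks[bid] = content   # writing the same bytes again is a no-op
        return bid

store = ContentStore()
id1 = store.put(b"the one and only cat picture")
id2 = store.put(b"the one and only cat picture")  # a "duplicate" upload
print(id1 == id2, len(store.blocks))              # True 1: stored exactly once
```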

Protocols like Kademlia together with Merkle chains and Merkle DAGs give us the tools to model file hierarchies and revision timelines and share them in a large scale peer-to-peer network. There are already protocols that use these technologies to build a distributed storage that fits our needs. One that looks very promising is IPFS.

The problem with names and shared things

Ok, so with the above techniques, we can solve quite a few of the problems I outlined at the beginning: We get distributed, highly redundant storage on devices connected to the web that can keep track of the history of files and keep all the versions around for as long as they are needed. This (almost) solves the availability, capacity, permanence, and content verification problem. It also addresses bandwidth problems — peers send data to each other, so there are no major hotspots/bottlenecks.

We will also need a scalable compute resource, but this shouldn’t be too difficult: Everyone’s laptops and phones are now orders of magnitude more powerful than what most apps need (including fairly complex machine learning computations), and compute is generally pretty horizontally scalable. So as long as we can make every device do the work necessary for its user, there shouldn’t be a major problem.

So now that cat image I want to see on Slack can come from one of my coworkers sitting next to me instead of from the Slack servers (and without crossing any oceans in the process). In order to post a cat picture, though, I need to update a channel in place (i.e., the channel will no longer be what it was before my message, it will have changed). This fairly innocuous sounding thing turns out to be the hard part. (Feel free to skip to the next section if this bit gets too technical.)

The hard part: Updating in place

The concept of an entity that changes over time is really just a human idea to give the world some order and stability in our minds. We can also think about such an entity as just an identity — a name — that takes on a series of different values (which are static, immutable) as time progresses (Rich Hickey explains this really well in his talks Are we there yet? and The value of values). This is a much more natural way of modelling information in a computer, with more natural consequences. If I tell you something, I can no longer change what I told you, or make you unlearn it. Facts, e.g. who the President of the United States is, don’t change over time; they just get superseded by other facts referred to by the same name, the same identity. In the git example, a ref (branch or tag) can point to (hold an ID and thus a value of) a different commit at different times, and making a commit replaces the value it currently holds. The Slack channel would also represent an identity whose values over time are growing lists of messages.
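One way to picture that identity/value split in code is sketched below; it is only an illustration of the idea, not any particular system’s API. The immutable values live in a content-addressed store, while the "identity" (a branch name, a channel name) is just a named pointer that takes on a succession of value IDs over time.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Immutable values live in a content-addressed store...
values = {}
def store(content: bytes) -> str:
    vid = h(content)
    values[vid] = content
    return vid

# ...while an "identity" (like a git branch or a Slack channel name) is just a
# mutable name that points at a succession of those immutable value IDs.
refs = {}
refs["channel/general"] = store(b'["hello"]')
refs["channel/general"] = store(b'["hello", "funny cat picture"]')

# Old values are never modified; the name simply points somewhere new.
print(refs["channel/general"], len(values))  # latest value ID, 2 immutable values kept
```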

The real trouble is, we’re not alone in the channel. Multiple people try to post messages and change the channel, sometimes simultaneously, and someone needs to decide what the result should be.

In centralized systems, such as pretty much all current web apps, there is a single central entity deciding this “update race” and serializing the events. Whichever event reaches it first wins. In a distributed system, however, everyone is an equal, so there needs to be a mechanism that ensures the network reaches a consensus about the “history of the world.”

Consensus is the most difficult problem to solve for a truly distributed web supporting the whole range of applications we are using today. It doesn’t only affect concurrent updates, but also any other updates that need to happen “in-place” — updates where the “one source of truth” is changing over time. This issue is particularly difficult for databases, but it also affects other key services, like the DNS. Registering a human name for a particular block ID or series of IDs in a decentralized way means everyone involved needs to agree about a name existing and having a particular meaning, otherwise two different users could see two different files under the same name. Content-based addressing solves this for machines (remember, a name can only ever point to one particular piece of matching content), but not humans.

A few major strategies exist for dealing with distributed consensus. One of them involves selecting a relatively small “quorum” of managers with a mechanism for electing a “leader” who decides the truth (if you’re interested, look at the Paxos and Raft protocols). All changes then go through the manager. This is essentially a centralized system that can tolerate a loss of the central deciding entity or an interruption (a “partition”) in the network.

Another approach is a proof-of-work based system like the Bitcoin blockchain, where consensus is ensured by making peers solve a puzzle in order to write an update (i.e. add a valid block to a Merkle chain). The puzzle is hard to solve but easy to check, and some additional rules exist to resolve a conflict if it still happens. Several other distributed blockchains use a proof-of-stake based consensus, reducing the energy demands required to solve a puzzle. If you’re interested, you can read about proof of stake in this whitepaper by BitFury.
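A simplified proof-of-work puzzle looks something like the sketch below. It is only a stand-in for Bitcoin’s actual difficulty mechanism, which differs in detail: brute-force a nonce until the block’s hash falls below a target (expensive), while checking a proposed nonce takes a single hash (cheap).

```python
import hashlib
from itertools import count

def pow_hash(block: bytes, nonce: int) -> int:
    digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def solve(block: bytes, zero_bits: int = 16) -> int:
    """Hard part: try nonces until the hash starts with enough zero bits."""
    target = 1 << (256 - zero_bits)
    for nonce in count():
        if pow_hash(block, nonce) < target:
            return nonce

def check(block: bytes, nonce: int, zero_bits: int = 16) -> bool:
    """Easy part: a single hash verifies that the work was done."""
    return pow_hash(block, nonce) < (1 << (256 - zero_bits))

block = b"next block of the chain: alice pays bob 5"
nonce = solve(block)          # takes tens of thousands of hashes on average
print(nonce, check(block, nonce))
```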

Yet another approach for specific problems revolves around CRDTs — conflict-free replicated data types, which, for specific cases, don’t suffer from the consensus problem at all. The simplest example is an incrementing counter. If all the updates are just “add one,” as long as we can make sure each update is applied just once, the order doesn’t matter and the result will be the same.
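The grow-only counter is short enough to sketch in full. Each peer increments only its own slot, and a merge takes the element-wise maximum, so replicas converge to the same total no matter how often or in what order they exchange state. This is the classic G-Counter design, written here as a toy illustration rather than production code.

```python
class GCounter:
    """Grow-only counter CRDT: one slot per peer, merge = element-wise max."""
    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.slots = {}

    def increment(self, n: int = 1):
        self.slots[self.peer_id] = self.slots.get(self.peer_id, 0) + n

    def merge(self, other: "GCounter"):
        for pid, seen in other.slots.items():
            self.slots[pid] = max(self.slots.get(pid, 0), seen)

    def value(self) -> int:
        return sum(self.slots.values())

a, b = GCounter("alice"), GCounter("bob")
a.increment(); a.increment()   # alice adds 2
b.increment()                  # bob adds 1, concurrently

# Merging in either order, any number of times, gives the same total.
a.merge(b); b.merge(a)
print(a.value(), b.value())    # 3 3
```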

There doesn’t seem to be a clear answer to this problem just yet, and there may never be only one, but a whole lot of clever people are working on it, and there are already a lot of interesting solutions out there to pick from. You just need to select the particular trade-off you can afford. The trade-off generally lies in the scale of the swarm you’re aiming for and in picking a property of the consensus you’re willing to let go of at least a little — availability or consistency (or, technically, partition tolerance, but partitions seem difficult to avoid in a highly distributed system like the ones we’re talking about). Most applications seem to be able to favor availability over immediate consistency — as long as the state ends up being consistent in reasonable time.

Privacy in the web of public files

One obvious problem that needs addressing is privacy. How do we store content in the distributed swarm of peers without making everything public? If it’s enough to hide things, content-addressed storage is a good choice, since in order to find something, you need to know the hash of its content (somewhat like private Gists on GitHub). So essentially we have three levels of privacy: public, hidden, and private. The answer to the third one, it seems, is in cryptography — strongly encrypting the stored content and sharing the key “out of band” (e.g. physically on paper, by touching two NFC devices, by scanning a QR code, etc.).
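As a sketch of that third, private level, assuming you were using Python’s cryptography package (the article doesn’t prescribe any particular library), you could encrypt the content before it ever reaches the swarm and pass the key along out of band:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key never touches the swarm; share it out of band (paper, NFC, a QR code...).
key = Fernet.generate_key()

secret = b"wedding video bytes we do not want the whole swarm to read"
ciphertext = Fernet(key).encrypt(secret)  # this is what gets content-addressed and stored publicly

# Anyone can fetch the ciphertext; only holders of the key can recover the plaintext.
assert Fernet(key).decrypt(ciphertext) == secret
```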

Relying on cryptography may sound risky at first (after all, hackers find vulnerabilities all the time), but it’s actually not that much worse than what we do today. In fact, it’s most likely better in practice. Companies and governments generally store sensitive data in ways that aren’t shareable with the public (including the individuals the data is about). Instead, it’s accessible only to an undisclosed number of people employed by the organizations holding the data and is protected, at best, by cryptography based methods anyway. More often than not, if you can gain access to the systems storing this data, you can have all of it.

But if we move instead to storing private data in a way that’s essentially public, we are forced to protect it (with strong encryption) so that it is no good to anyone who gains access to it. This idea is roughly the same as the one behind making security-related software open source so that anyone can look at it and find problems. Knowing how the security works shouldn’t help you break it.

An interesting property of this kind of access control is that once you’ve granted someone access to some data, they will have it forever for that particular revision of the data. You can always change the encryption key for future revisions, of course. This is also no worse than what we have today, even though it may not be obvious: Given access to some data, anyone can always make a private copy of it.

The interesting challenge in this area is coming up with a good system of establishing and verifying identities and sharing private data among a group of people that needs to change over time, e.g. a group of collaborators on a private git repository. It can definitely be done with some combination of private-key cryptography and rotating keys, but making the user experience smooth is likely going to be a challenge.

From the cloud to a … fog

Hard problems to solve notwithstanding, our migration away from the cloud will be quite an exciting future. First, on the technical front, we should get a fair number of improvements out of a peer-to-peer web. Content-addressable storage provides cryptographic verification of content itself without a trusted authority, hosted content is permanent (for as long as any humans are interested in it), and we should see fairly significant speed improvements, even at the edges in the developing world (or even on another planet!), far away from data centers.

At some point even data centers may become a thing of the past. Consumer devices are getting so powerful and ubiquitous that computing power and storage (a computing “substrate”) is almost literally lying in the streets.

For businesses running web applications, this change should translate to significant cost savings and far fewer headaches building reliable digital products. Businesses will also be able to focus less on downtime risk mitigation and more on adding customer value, benefitting everyone. There is still going to be a need for cloud-hosted servers, but they will only be one of many similar peers. We could also see heterogeneous applications, where not all the peers are the same — where there are consumer-facing peers and back-office peers as part of the same application “swarm” and the difference between them is only in access level, enforced by cryptography.

The other large benefit for both organizations and customers is in the treatment of customer data. When there’s no longer any need to centrally store huge amounts of customer information, there’s less risk of losing such data in bulk. Leaders in the software engineering community (like Joe Armstrong, creator of Erlang, whose talk from Strange Loop 2014 is worth a watch) have long argued that the design of the internet where customers send data to programs owned by businesses is backwards and that we should instead send programs to customers to execute on their privately held data that is never directly shared. Such a model seems much safer and doesn’t in any way prevent businesses from collecting useful customer metrics they need.

And nothing prevents a hybrid approach with some services being opaque and holding on to private data.

This type of application architecture seems a much more natural way to do large scale computing and software services — an Internet closer to the original idea of open information exchange, where anyone can easily publish content for everyone else and control over what can be published and accessed is exercised by consensus of the network’s users, not by private entities owning servers.

This, to me, is hugely exciting. And it’s why I’d like to get a small team together and, within a few weeks, build a small, simple proof-of-concept mobile application, using some of the technologies mentioned above, to show what can be done with the peer-to-peer web. The only current idea I have that is small enough to build relatively quickly and interesting enough to demonstrate the properties of such an approach is a peer-to-peer, truly serverless Twitter clone, which isn’t particularly exciting.

If you’ve got a better idea (which isn’t too hard!), or if you have anything else related to peer-to-peer distributed web to talk about, please tweet at me; I’d love to hear about it!

Viktor Charypar is a Tech Lead at UK-based digital consultancy Red Badger.

via VentureBeat http://ift.tt/2y3loKF

SpeakPipe Now Works on iPads

This Could Be An Interesting Adaptation

SpeakPipe is a neat tool that I have been recommending for years. It is a tool that you can add to your blog to collect voice messages from blog visitors. The messages are automatically recorded and transcribed for you to listen to and/or read. Unfortunately, until now it didn’t work if your blog visitors were using iPads. That recently changed when SpeakPipe pushed an update for Safari.

SpeakPipe now works in Safari on iPads and iPhones that are using iOS 11.

Applications for Education

When it is installed on a classroom blog, SpeakPipe provides a good way for parents to leave voicemail messages. Having your messages in SpeakPipe lets you dictate a response that can then be emailed back to the person who left the message for you.

SpeakPipe offers another tool called SpeakPipe Voice Recorder. SpeakPipe’s Voice Recorder is a free tool for quickly creating an MP3 voice recording in your web browser on a laptop, Chromebook, Android device, or iOS device. To create a recording with the SpeakPipe Voice Recorder simply go to the website, click “start recording,” and start talking. You can record for up to five minutes on the SpeakPipe Voice Recorder. When you have finished your recording you will be given an embed code that you can use to place it in your blog or website. You will also be given a link to share your recording. Click the link to share your recording and that will take you to a page to download your recording as an MP3 file.

SpeakPipe’s Voice Recorder does not require you to register in order to create and download your audio recordings. The lack of a registration requirement makes it a good choice for students who don’t have email addresses or for anyone else who simply doesn’t want to have to keep track of yet another username and password.

Students could use SpeakPipe’s Voice Recorder to record short audio interviews or to record short audio blog entries.

Teachers could use SpeakPipe’s Voice Recorder to record instructions for students to listen to in lieu of having a substitute teacher read instructions to their students.

This post originally appeared on Free Technology for Teachers. If you see it elsewhere, it has been used without permission.

 

via Free Technology for Teachers http://ift.tt/2yMQCaa

Is AR Good 4 Teaching & Learning? Or should we get real?

Augmented Reality is nothing new for youth. It has been a part of students' social experience in apps like Snapchat, and it made a big splash when Pokemon Go made its debut. But when it comes to learning, does it have a place?

While seeing an object, insect, or animal up close in augmented reality is certainly preferable to reading about it in your science text, is it really the best way to help students learn?

Is learning via AR better than that?

Well, yeah. Probably. It will engage kids with the wow factor for a bit, but then what?

And what about the source? Who wants us to buy into this? A textbook provider? A publisher? A testing company? A hardware or software provider?

What’s in it for them?

And, what about all the other ways to learn? Is it better than that? Is it cost effective?

AR: The Verdict? It depends.

When compared to textbooks, most would agree that AR improves upon the learning experience. It can also help make a textbook a bit more interactive and give it some life.

But what about other options? A powerful novel? A game? A MagniScope? A PBS documentary? A YouTube expert?

To help think about this, I turned to my friends at Modern Learners for some insights.
When thinking about AR, VR, mixed reality, etc., Gary Stager asks, are we “investing in reality first” before we invest in such technologies?

That’s a good question. Especially for kids who live in big cities like where I work. In New York City we have cultural neighbourhoods, experiences, some of the finest museums, zoos, gardens, and experts right in the backyard of our schools. Are we taking students there? Or if we aren’t in such communities, are we using resources like Facebook Live, Periscope, and Skype to connect and interact with real people and places in other parts of the world?

When I served as a library media specialist in an inner city school in Harlem, we had immersive experiences in places like Chinatown, Little Italy, and Spanish Harlem. We visited places like El Museo del Barrio and the Tenement Museum. We had scavenger hunts around the neighbourhoods, and the museums were happy to freely open their doors to our inner city youth visiting on weekdays.

Of course, there are times when a real experience cannot occur and a virtual one has to take its place. For example, a trip to Mars or to the Titanic is out of reach. Engaging in or witnessing an activity that is dangerous for a newbie, such as driving a car, flying a plane, or operating a train, is another example.

But even with such extremes, there may be a movie, field trip, game, or museum experience that provides a better learning experience.

In his Modern Learners podcast, Will Richardson puts it this way: If for some reason we really can’t invest in realities, then yes, these “halfway measures for poor kids” make sense, but only if it really is not possible to bring students more authentic opportunities.

But let’s make sure those real experiences are not available before jumping into augmented ones.

Consider this…

When trying to determine what is best for students, here are some questions you can ask:

  • How would a student use this outside of school?

  • Does it help a young person create agency over learning?

  • Does this have a real-life use?

  • Is this better than…

  • Reading about it?

  • Watching it?

  • Doing it?

When you consider those questions, you will be better positioned to determine and explain if augmented reality should become a reality for the students where you teach.

via Lisa Nielsen: The Innovative Edu… http://ift.tt/2yI8Xax

Supporting Students’ Efforts in Determining Real from Fake News

Our students use the web every day—shouldn’t we expect them to do better at interpreting what they read there? Perhaps, but not necessarily. Often, stereotypes about kids and technology can get in the way of what’s at stake in today’s complex media landscape. Sure, our students probably joined Snapchat faster than we could say “Face Swap,” but that doesn’t mean they’re any better at interpreting what they see in the news and online.

As teachers, we’ve probably seen students use questionable sources in our classrooms, and a recent study from the Stanford History Education Group confirms that students today are generally pretty bad at evaluating the news and other information they see online. Now more than ever, our students need our help. And a big part of this is learning how to fact-check what they see on the web.

In a lot of ways, the web is a fountain of misinformation. But it also can be our students’ best tool in the fight against falsehood. An important first step is giving students trusted resources they can use to verify or debunk the information they find. Even one fact-checking activity could be an important first step toward empowering students to start seeing the web from a fact-checker’s point of view.

Here’s a list of fact-checking resources you and your students can use in becoming better web detectives.

FactCheck.org

A project of the Annenberg Public Policy Center at the University of Pennsylvania, the nonpartisan, nonprofit FactCheck.org says that it “aims to reduce the level of deception and confusion in U.S. politics.” Its entries cover TV ads, debates, speeches, interviews, and news releases. Science teachers take note: The site includes a feature called SciCheck, which focuses on false and misleading scientific claims used for political influence. Beyond individual entries, there also are articles and videos on popular and current topics in the news, among a bevy of other resources.

PolitiFact

From the independent Tampa Bay Times, PolitiFact tracks who’s telling the truth—and who isn’t—in American politics. Updated daily, the site fact-checks statements made by elected officials, candidates, and pundits. Entries are rated on a scale that ranges from “True” to “Pants on Fire” and include links to relevant sources to support each rating. The site’s content is written for adult readers, and students may need teachers’ help with context and direction.

Snopes

The popular online resource Snopes is a one-stop shop to fact-check internet rumors. Entries include everything from so-called urban legends to politics and news stories. Teachers should note that there’s a lot here on a variety of topics—and some material is potentially iffy for younger kids. It’s a great resource for older students—if you can keep them from getting distracted.

OpenSecrets.org

OpenSecrets.org is a nonpartisan organization that tracks the influence of money in U.S. politics. On the site, users can find informative tutorials on topics such as the basics of campaign finance—not to mention regularly updated data reports and analyses on where money has been spent in the American political system. While potentially useful for fact-finding, the site is clearly intended for more advanced adult readers and is best left for older students and sophisticated readers.

Internet Archive Wayback Machine

This one isn’t a site that performs fact-checking. Instead, the Internet Archive Wayback Machine is a tool you can use yourself to fact-check things you find online. Like an internet time machine, the site lets you see how a website looked, and what it said, at different points in the past. Want to see Google’s home page from 1998? Yep, it’s here. Want to see The New York Times’ home page on just about any day since 1996? You can. While they won’t find everything here, there’s still a lot for students to discover. Just beware: The site can be a bit of a rabbit hole—give students some structure before they dive in, because it’s easy to get lost or distracted.

Want to take your students’ knowledge of fact-checking a step further? Engage them in discussions around why these sites and organizations are seen as trusted (and why others might not be trusted as much). Together, look into how each site is funded, who manages it, and how it describes its own fact-checking process.

via Edutopia http://ift.tt/2yiJzak

Jigsaw variant – Pulsing

Pulsing is a jigsaw variant that allows students to benefit from the "hive" mind, but also insists on individual accountability in terms of project and task completion.

I use pulsing a lot for research. I have attached an example I used with a grade 7 class doing an inquiry on creating a fully functional island with a government, a people, a culture, a population centre, etc.

My belief is that structures such as this address the following learning structure considerations…

  1. Student Voice
  2. Accountability
  3. Broadening Perspectives
…and are vitally important in an educational landscape. See below.