Category Archives: Innovation

Free Vector Software: Best Editor and Drawing Tools

Looking for inexpensive ways to create vectors or scalable graphics? Here are some amazing free or nearly free tools to get the job done. The hands-down winner for features is Inkscape, but it's definitely fussy to learn; I would choose Vectr if you're just getting started.

 


Paying $500+ for a new CorelDRAW Suite may not be the best investment for a graphic designer, especially a beginner. The same goes for Adobe's pricey subscriptions. Give or take, most graphic design tools are built on the same principles. And more often than not, we need to create something simple and effective – an icon for our website, a logo – or just have some fun with vector art. Even for serious vector art, we probably never use all the fancy features big companies throw at us.

Thankfully, there is free vector software that allows us to do what we want. In this article, we will cover the most popular and effective ones.

Inkscape

Platform: Windows, macOS, Linux
Download link: inkscape.org

Inkscape is often called the GIMP of vector apps. It's an open-source program with so many features that you wonder why nobody is selling it. And yet, it's completely free. The app was released in 2003 and has since become one of the most popular vector graphics programs in the world. It's available in 90 languages and across many platforms, including macOS and Linux.

The node workflow in Inkscape is similar to that of Adobe Illustrator and allows adding new nodes, as well as moving and editing them.

Helpful features like node aligning and symmetry are also available to use. Bezier curves and pencil tools work smoothly with graphic tablets, allowing users to create hand-drawn vector images of any complexity.

Inkscape also has bitmap tracing for converting raster images into vector paths. However, the feature is not as advanced as the one in Adobe Illustrator, and you may need some extra manipulation to make it work, or you'll have to trace manually.

Inkscape supports all popular formats, including SVG, EPS, JPG, PNG, PostScript and others.
The app's feature list is too long to cover here, and chances are it can do pretty much everything paid programs like Adobe Illustrator can do.

Full video tutorial here by Logos By Nick

At the very least, Inkscape is nearly as good as Adobe Illustrator. Both programs share a very similar workflow, and if you're used to one of them, the switch is easy. For logo design and vector graphics, there won't be much difference whichever one you use, apart from having an extra buck to spend on something else.

Tutorials: Inkscape's website offers a wide variety of tutorials from beginner to advanced here. We also recommend Logos by Nick's YouTube channel – it offers many excellent tips and workflows from a practicing designer.

Pros:

  • Many features, solid AI alternative
  • Works smoothly with tablets
  • Multiple platforms (Linux included)
  • Extensions

Cons:

  • Rare performance issues with big files
  • Some features are not intuitive – tutorials are needed

Vectr

Platform: Windows, macOS, Linux, Chrome OS, Web
Download link: vectr.com

Unlike Inkscape, Vectr is fresh blood in the yard. Which is good, because it means development is driven by industry demands, not by the curse of bloatware.

This free vector art program was released just two years ago, but the pace of its development is truly magnificent. With all the features added recently, it's hard to imagine what this product could be capable of in the future. However, the developers insist that one thing will stay unchanged – it will be forever free.

To compare Vectr and Inkscape is to make a great mistake – these two products are each in a league of their own. What Vectr lacks in features, it compensates for in intuitiveness. The learning curve is practically non-existent: if you're just starting in the design industry, you may be able to create your very first logo five minutes after you start the program, without having to skim through pages of tutorials.

Another advantage of the app is ubiquity – not only is it available on all popular platforms, but it also has a browser version with the same functionality as its desktop counterpart. That means you can work on your designs on your PC and then finish them in an internet cafe in the middle of nowhere.

From Vectr's official website

The app allows all standard vector operations – creating and editing geometric shapes, curves, and paths. It supports multiple layers and pages, letting you organize your project. Vectr allows imports in AI, EPS, SVG, PNG, and JPEG file formats.

Another useful feature is the ability to share your projects simply by sending a URL to your colleagues, letting them view and edit it in a workflow similar to Google Docs. The development of full-scale collaboration with multiple people working on the same project simultaneously is currently underway, along with Marketplace and Versioning. You can literally watch the Vectr team’s backlog in the Open Roadmap.

Tutorials: vectr.com/tutorials

Pros:

  • Cross-platform and browser versions
  • Intuitive, easy-to-use interface
  • Easy sharing of projects
  • Integration with WordPress

Cons:

  • Need to create an account
  • Some people report crashes – the new features may be unstable
  • Lacks advanced features

Gravit Designer

Platform: Windows, macOS, Linux, Chrome OS, Web
Download link: designer.io

We might be a bit subjective here. After all, Gravit used our icons. However, Gravit's developers have much more to offer than good taste.

Feature-wise, Gravit falls precisely between Inkscape and Vectr: it has more features than Vectr while staying nearly as intuitive. And, like all the apps mentioned in this article, it comes with no price attached. Talk about balance.

The app allows you to do everything you would expect from a vector program: create curves, edit paths, manage layers and use the knife tool. It supports SVG, PDF, JPEG, .SKETCH and, recently, EPS (finally!) formats for import and export. In addition, you can work on your projects across different platforms; Gravit Cloud allows seamless transfer of files between the desktop and online versions. A portable version is also available.

Among the other handy things Gravit offers are vector assets available within the app. Gravit Designer's library of assets includes icons, shapes, emojis, and illustrations – all of which can be combined and modified for commercial and non-commercial use.

Even though the tool is free (according to the developers, "there will definitely be areas in Gravit Designer in the foreseeable future, or areas surrounding Gravit Designer, that are subject to a charge"), bugs are being fixed and the most-voted-for features are being developed. You can see a full description of the new features in the Gravit Designer developers' blog, along with a bunch of cool tutorials.

Tutorials: YouTube playlist

Pros:

  • Huge library of vector assets out of the box
  • Intuitive UI and workflow
  • Browser and cross-platform versions, all connected through the cloud

Cons:

  • Not yet clear which features will stay free in the future
  • No advanced features like the ones found in Adobe Illustrator or Inkscape

SVG-edit

Platform: Web
Download link: https://github.com/SVG-Edit/svgedit

SVG-edit is one of those tools that does exactly what its name suggests: it allows you to edit SVGs and create your own. It's a free online vector program that works in most popular browsers.

The feature set is standard: create shapes, draw with a pencil, convert lines to paths, colorize and add images. The result can be exported into popular web formats: WMP, JPG, BMP, GIF, TIFF, and of course SVG itself. The interface is pretty straightforward and reminiscent of painting programs from the early 2000s – nothing fancy here.

One of the advantages of SVG-edit is that this open-source program can be easily embedded into your website, allowing your users to create and edit SVGs of their own. The tool also lets you quickly export results as HTML code.

Even though SVG-edit lacks features in comparison to the apps mentioned earlier in this article, it can still be useful in some cases – especially for web developers. The freely available GitHub repository allows you to modify the source code to your needs. Another plus is that the tool is constantly being updated.

Tutorials: GitHub

Pros:

  • Simple and quick to use
  • Open-source web code

Cons:

  • Lacks advanced features
  • Node management is not perfect

Honorable mentions

RollApp: Not a vector drawing program itself, but RollApp allows you to run some popular desktop apps online, in your browser – Inkscape among them. So if being desktop-only was the one thing stopping you from using Inkscape, RollApp will seal the deal.

FatPaint: This web tool really has some 90s vibes to it. But if you're a fan, give it a try. It's available for free, and there are enough features to make logos or other fancy web graphics. However, if you plan to use FatPaint for commercial purposes, the developers kindly ask you to support them with a Pro subscription.

A free web vector editor to create vector images. It allows export in SVG, PNG and JPEG. The clean UI is a bonus. It features everything you need to create a logo or an icon inside your browser.

A graphics editor available for Windows, macOS and Linux. It's primarily used for building math graphs and illustrations (the ones you often see in school math books). But if you feel like life is not hard enough yet, you can try drawing vector art with this tool.

If you're interested in free raster drawing software, check out our Best Free Drawing Software: Five Candidates article.

Have an interesting article to share with our readers? Let's get it published.

via 80,300 Free Icons (SVG, PNG) https://ift.tt/2NcM8mJ

The Cost of Innovation: When Does It Make Cents to Buy In?

Future Reality

I've been deep in conversations with colleagues, friends, and anyone else who will listen about Learning Technology's next three-year plan and what might be best to include in it. These conversations have surfaced topics ranging from computational thinking and coding, to mobile learning practices, to assessment and digital portfolios: what these might look like, how best to implement them successfully at various grade levels, and what we think the next great innovation in the educational technology space will be.

We often turn to documents like the Horizon Report for guidance on such matters. It provides a short-, middle- and long-term outlook on innovative practices and the technologies that support them, their costs and benefits, and the likely adoption timing of these practice/technology combinations in various educational spaces (K-8, high school, and beyond).

One topic that is continually on our radar is Virtual Reality, or VR. Check out some examples here: Flipside, Co-Spaces, Tinkercad, Unity, Sketchfab; 8 Amazing Uses of VR That Will Blow Your Mind; When VR Meets Education; 7 Top Educational Virtual Reality Apps; Real Uses of Virtual Reality in Education; 10 Companies Working on Education in Virtual Reality. It is a very promising technology that "refers to computer-generated environments that simulate the physical presence of people and/or objects and realistic sensory experiences. At a basic level, this technology takes the form of 3D images that users interact with and manipulate via a computer interface."

“VR devices break down into two categories: high-end headsets, such as the Oculus Rift, HTC Vive, or Sony PlayStation VR, and budget headsets that include the Samsung Gear VR and Google Cardboard along with accessories like headphones and haptic controller accessories.”

“Contemporary applications allow users to more authentically ‘feel’ the objects in these displays through gesture-based and haptic devices, which provide tactile information through force feedback. VR models can be created using a variety of CAD software such as Flipside, Co-Spaces, Tinkercad, Unity, and Sketchfab. These content creation tools along with the viewers can make learning more authentic, allow for empathetic experiences, and increase student engagement.” – excerpts from the Horizon Report 2017

For the last several years, VR sat in the "four to five years out" category, and only this year moved into the "two to three years out" position in the Horizon Report. This is excellent news for educators and learners alike: it brings the benefits of VR learning, and creative spaces like Flipside and Co-Spaces, that much closer to classroom reality.

There is no denying that VR has many educational benefits. It doesn't take much looking on the Internet or elsewhere to find resources dedicated to this topic:

As Terry Heick said in Why Virtual Reality is So Important, “Through the use of digital technology, virtual realities can be designed precisely for human interaction for very specific reasons to create experiences not otherwise possible.

By suspending disbelief the same way we do when we read a novel or watch a movie, an artificial reality can be designed to enable experiential learning, scenario-based learning, social learning, workplace training, and more. Virtual reality can be used for pure entertainment–digital toys, video games, or to swim with whales.”

There are many reasons to laud the possibilities inherent in this blossoming new technology… Sylvia Duckworth presents some of these in her Sketchnote fashion.

10 Reasons To Use Virtual Reality In The Classroom

Here are a few other samplings from around the Net:

  • What's not possible in reality is likely possible in virtual reality
  • Virtual game-based experience increases students' motivation and engagement
  • VR bridges cultures and fosters understanding among young students
  • VR allows learners to collaboratively construct architectural models, recreations of historic or natural sites, and other spatial renderings
  • VR engages students in topics related to literature, history and economics by offering a deeply immersive sense of place and time, whether historic or evolving

SAMR Continuum

If we look at the SAMR continuum model originally created by Ruben R. Puentedura, many VR tools would be considered transformative in nature: they redefine how traditional tasks are done, changing them so dramatically that the original task could not be completed the same way without the tool.

Co-Spaces Edu is a VR creation/coding tool that is gaining ground in the Division. A creative platform for all ages and subjects, it complements traditional teaching methods by immersing students in a world where they can create, consume and connect with the curriculum on a completely new level, even through the revolutionary visual medium of virtual reality! Learners and teachers can easily create 3D and VR content, code spaces with Blockly, JavaScript or TypeScript, and explore their creations in VR, while teachers can manage and observe their students' creation process.

Sounds impressive, and it is! What’s not to like?

Recently, some select members of our Division had a tour of an amazingly promising tool called Flipside, probably best described as a VR environment for film and animation making. On their website, Flipside describes the tool as "your own virtual TV studio. With nothing more than an Oculus Rift or HTC Vive, you can produce your own animated shows in real-time, whether they're recorded or streamed live to the web." It sounds incredible, and the experience for the "VR-naut" was immersive and unlike anything experienced before: fully enriching, with multiple opportunities for cross-curricular connections, and much more flexible and forgiving than an actual film/animation environment. Truly amazing, gobsmacking even! The potential for a learner to succeed in such an environment is huge.

What’s not to like? At first glance, nothing really.

Nothing, that is, until we look at accessibility. By accessibility, I am referring to the total cost of ownership: at minimum, the funds necessary to purchase the hardware that provides the above enriched experience for a single VR-naut. As you may have surmised, it's not inexpensive.

And here's where we come to the crux of this article, and perhaps the reason why VR hovers, for now, just out of reach in the Horizon Report's adoption timeline: the costs may not yet justify the benefits.

Let's look at the two scenarios, not from an educational standpoint, because both provide experiences that are transformative and valuable, but from a cost standpoint:

Both platforms require a VR headset of some sort. Here's the lay of the land in that department. Despite the fact that VR is still developing, some progress has been made in the economic scaling of this technology. The cost to the consumer of VR hardware (headsets in particular, but also the desktop computers needed to drive them, with hefty video RAM, RAM and overall speed requirements) is steadily declining, as seen in the head-mounted displays (HMDs) commercially available today: Google Cardboard for $11, Samsung Gear VR for $80, the Oculus Rift, a desktop VR device, for $599, and the HTC VIVE retailing for $799.

The “For Now” Cost Breakdown:

Co-spaces Edu:

  1. Google Cardboard: $10
  2. An iPhone (possibly older phones and iPods) or Android phone: $199 (or personal devices)
  3. Platform costs – Basic: free; Pro: USD $75 per year (best option for education)

Flipside

  1. Oculus Rift for $599/HTC VIVE for $799
  2. Desktop device (minimum requirements: 8GB RAM, Intel i5 or better, NVIDIA GTX 1060 / AMD Radeon RX 480 or greater). One should note that minimum requirements are just that: they will allow ONE Oculus Rift/Vive device to function with your desktop, but not necessarily within the game or program you want to use it with. Devices such as these run somewhere in the $1,500–$2,000 range, and if you would like more than one headset connection, you will need more RAM, more connection ports, and potentially a faster graphics card and processor.
  3. Platform costs: at the moment an indie licence is $200 monthly for a single seat, or $1,000 monthly for a business licence. This may change as educational licensing is discussed, but nothing is in place at the moment.

The cost to get ONE VR-naut into VR-land is approximately $211 per year for Co-Spaces ($10 + $199 + US$75/50 seats) and at least $4,099 per year for Flipside ($599 Oculus Rift + $1,500 minimum-spec desktop + $200 × 10 months of the school year).
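
That arithmetic in one place, as a minimal sketch; the 50-seat Pro split and the 10-month licence term are this article's assumptions, not vendor pricing:

```python
# Per-learner annual cost, using only the estimates quoted in this article.

cospaces = 10 + 199 + (75 / 50)      # Cardboard + phone + Pro plan split over 50 seats
flipside = 599 + 1500 + (200 * 10)   # Rift + minimum-spec desktop + indie licence, 10-month school year

print(f"Co-Spaces per VR-naut: ${cospaces:,.2f}")  # $210.50, roughly $211
print(f"Flipside per VR-naut:  ${flipside:,.2f}")  # $4,099.00
```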

VR has a definitive place of value, but are these kinds of costs an educational reality when so many other critical learning technology priorities are pressing as well?


Here are the facts as of the writing of this article, as best as I can present them. We have a transformative technology with great potential for enhancing some learners' pathways.

The issue is, it will only impact a very few at present. Total cost of ownership helps inform my decision-making in many situations, especially for bigger-ticket educational items, as do solid educational rationales. Are these costs worth it when learners must cycle one at a time, assembly-line style, through an experience to get at the true benefits of an incredible technology? I'm not entirely certain, for a number of reasons:

    1. First off, I am sure that this is not meaningful practice! Using technology for technology's sake, when we can't ensure we implement it with solid and effective educational practices, seems backwards, inefficient and, at best, exclusive. Let's take a minute to harken back to the Smartboard days. In their heyday, these devices were a hot technology commodity, despite the fact that they were essentially large mice, initially allowing one person (and much later, up to four people) to manipulate objects on an interactive surface (although truthfully, in my experience, implementation almost always involved one person touching the board at a time). On the SAMR continuum, Smartboards primarily enhance learning; they tend not to be used in a transformative way. Teachers simply took existing ways of doing things and digitized them with little or no functional change – for example, a worksheet could be presented and completed digitally, usually by one person, with the rest of the class looking on. Not a terribly effective, efficient or fully class-engaging activity. Sounding familiar? We may be setting up a similar situation with the VR-naut in the Flipside scenario. A school may only be able to afford one VR setup. So one VR-naut gets to drive, be fully immersed, and benefit from the VR experience. And the rest of the class? Well, they can watch. Or they could be involved in other parts of a larger process: planning for their own turn as VR-naut, or supporting the current one. But they are NOT experiencing the VR directly or often. This could be a problem. So how is this issue best addressed?
    2. Secondly, spending a lot of money to impact a few, rather than having a solid plan for impacting the many, seems wasteful in times of fiscal responsibility and restraint.
    3. Thirdly, even the soothsayers and technology pundits who assemble the venerable Horizon Report peg VR as being two to three or more years out of mainstream education. Should we wait, then, for the right time?
    4. Finally, the markets will hopefully play in our favour: prices for these devices, the headsets in particular, will continue to drop if their developers want to break into educational markets at all.

It’s a tough decision to make. This decision is made even more difficult when we consider things like:

  • Are all schools device equitable? Do all schools have the same proportion of devices available per student? Is there a reasonable ratio of devices per student in the Division (say 3:1)? Do all students have reasonable access to devices?
  • Do all schools have wireless ubiquitous enough to handle B.Y.O.D. needs as well as all Divisional devices in the building? What's needed to bolster and augment this in buildings? How are dead zones addressed?
  • Are all schools prepared for mobile learning and maker-space learning environments, and what do these mean in terms of pedagogical change? Is the training in place? Does VR learning fit easily into a mobile learning milieu (hardware-wise and pedagogy-wise)?
  • What about assessment training and connecting this meaningfully to digital portfolio development? What supports are needed here? What are the costs?

These are all incredibly vital Learning Technology initiatives that need attention, training dollars, and development and resource money. Where will this come from if monies are redirected in large amounts to VR? Can the funding gap be offset by parent groups? By fundraising? By grants? Or by school-based decision making? None of these options is sustainable, or even necessarily desirable, as they can promote the "haves and have-nots" syndrome. Yes, we could talk about priorities, and yes, VR could come out on top. In my opinion, that could be a tragic mistake. The list above contains too many highly critical items, much more important and pressing than pushing forward into VR at this moment.

However, I wanted to be able to go back to my colleagues with some information to help figure out how best to build the idea of VR learning into our next three-year plan. We could, after all, start small.

To that end, I have been casually surveying administrators, teachers, parents, business people from around Winnipeg over the holiday to get a sense of their thinking regarding this innovative technology idea. Here are their thoughts in brief:

  • Great idea. Love this VR stuff. Can you really create like that in VR? Virtual Reality is the future. I can’t wait for this to be brought into schools. How much time will my kid get to use this?
  • How can we justify these costs when classrooms can’t even manage wireless?
  • What about just regular devices for students? Are there enough of those available to students?
  • What did you say the costs were for just one student to use this technology? Seriously? You're joking?
  • What about balance? Surely we don’t have to jump immediately into every new thing as it comes out!
  • What about evaluating things? Can’t we see if the benefits really justify the costs? How is this done effectively?

The general feeling was that the technology is incredible, but too costly at the moment. So how to proceed?


Maybe we need to set our sights on what schools can actually use now, rather than on what they may someday be able to afford for all. I have heard the term "pockets of innovation" over-used too often lately. People have used it to rationalize purchasing expensive technology before really evaluating whether that technology is an appropriate purchase for the learners it's intended for. I find the statement used this way supercilious and, in the end, an unwise rationale. So, not a pocket of innovation! What will our focus be, then?

We should probably try to start small. Looking at what's affordable today, the Google Cardboard option with Co-Spaces or Tinkercad seems within reach. Flipside seems out of reach for the time being, despite its incredible potential. In fact, anything tied to higher-end headsets like the Oculus Rift or the VIVE seems financially problematic at this time; the requirements are simply too rich for the next few years. Building this into a three-year plan? Maybe a Professional Learning Community (P.L.C.) to explore Co-Spaces; the effective use of Google Cardboard; effective, efficient teaching and learning practices within and surrounding a VR environment; how VR and mobile learning dovetail; and perhaps where VR fits in the new LwICT continuum. Those are the kinds of investigations we should perhaps be exploring in the plan.

I think it behooves us to take a step back, to slow down and to look at the quickly blossoming landscape of both augmented and virtual reality and see how it makes sense to infuse it into our existing system. This is going to take some careful thinking from a group of intelligent people. How do we start? How do we make it learning/learner focused? How can we make it cost-effective? How can it be sustainable? This is possible and perhaps the WSD VR PLC is the way to make this a VIRTUAL REALITY!

Today’s news: Real or fake? [Infographic]

Today, students have a blizzard of information at the ready: on devices in their pockets, at school, in their homes, by their bedsides, on their wrists… It's an almost constantly "on" information world.

Information and content flood their eyes and ears in never-ending streams, torrents, downloads, feeds and casts. How do they determine what is real and what is not? What matters and what doesn't? Here's a cheat sheet to help out.


At a time when misinformation and fake news spread like wildfire online, the critical need for media literacy education has never been more pronounced. The evidence is in the data:

  • 80% of middle schoolers mistake sponsored content for real news.
  • 3 in 4 students can’t distinguish between real and fake news on Facebook.
  • Fewer than 1 in 3 students are skeptical of biased news sources.

Students who meet the ISTE Standards for Students are able to critically select, evaluate and synthesize digital resources. That means understanding the difference between real and fake news.

There are several factors students should consider when evaluating the validity of news and resources online. Use the infographic below to help your students understand how to tell them apart.

Click on the infographic to open a printable PDF.

Media-Literacy_Real-News-Infographic_11_2017

Learn more about teaching K-12 students how to evaluate and interpret media messages in the book Media Literacy in the K-12 Classroom by Frank Baker.

via www.iste.org http://ift.tt/2yq5zBQ

The end of the cloud is coming

Viktor Charypar is a Tech Lead at UK-based digital consultancy Red Badger.

We’re facing the end of the cloud. It’s a bold statement, I know, and maybe it even sounds a little mad. But bear with me.

The conventional wisdom about running server applications, be it web apps or mobile app backends, is that the future is in the cloud. Amazon, Google, and Microsoft are adding layers of tools to their cloud offerings to make running server software ever easier and more convenient, so it would seem that hosting your code in AWS, GCP, or Azure is the best you can do — it's convenient, cheap, easy to fully automate, you can scale elastically … I could keep going. So why am I predicting the end of it all?

A few reasons:

It can't meet long-term scaling requirements. Building a scalable, reliable, highly available web application, even in the cloud, is pretty difficult, and if you do it right and make your app a huge success, the scale will cost you both money and effort. Even if your business is really successful, you eventually hit the limits of what the cloud, and the web itself, can do: the compute speed and storage capacity of computers are growing faster than the bandwidth of the networks. Ignoring the net neutrality debate, this may not be a problem for most at the moment (apart from Netflix and Amazon), but it will be soon. The volumes of data we're pushing through the network are growing massively as we move from HD to 4K to 8K, and soon there will be VR datasets to move around.

This is a problem mostly because of the way we’ve organized the web. There are many clients that want to get content and use programs and only a relatively few servers that have those programs and content. When someone posts a funny picture of a cat on Slack, even though I’m sitting next to 20 other people who want to look at that same picture, we all have to download it from the server where it’s hosted, and the server needs to send it 20 times.

As servers move to the cloud, i.e. onto Amazon’s or Google’s computers in Amazon’s or Google’s data centers, the networks close to these places need to have incredible throughput to handle all of this data. There also have to be huge numbers of hard drives that store the data for everyone and CPUs that push it through the network to every single person that wants it. This gets worse with the rise of streaming services.

All of that activity requires a lot of energy and cooling and makes the whole system fairly inefficient, expensive, and bad for the environment.

It’s centralized and vulnerable. The other issue with centrally storing our data and programs is availability and permanence. What if Amazon’s data center gets flooded, hit by an asteroid, or destroyed by a tornado? Or, less drastically, what if it loses power for a while? The data stored on its machines now can’t be accessed temporarily or even gets lost permanently.

We’re generally mitigating this problem by storing data in multiple locations, but that only means more data centers. That may greatly reduce the risk of accidental loss, but how about the data that you really, really care about? Your wedding videos, pictures of your kids growing up, or the important public information sources, like Wikipedia. All of that is now stored in the cloud — on Facebook, in Google Drive, iCloud, or Dropbox and others. What happens to the data when any of these services go out of business or lose funding? And even if they don’t, it is pretty restricting that to access your data, you have to go to their service, and to share it with friends, they have to go through that service too.

It demands trust but offers no guarantees. The only way for your friends to trust that the data they get is the data you sent is by trusting the middleman and their honesty. This is okay in most cases, but websites and networks we use are operated by legal entities registered in nation states, and the governments of these nations have the power to force them to do a lot of things. While most of the time, this is a good thing and is used to help solve crime or remove illegal content from the web, there are also many cases where this power has been abused.

Just a few weeks ago, the Spanish government did everything in its power to stop an independence referendum in the Catalonia region, including blocking information websites telling people where to vote. Blocking inconvenient websites or secretly modifying content on its way to users has long been a standard practice in places like China. While free speech is probably not a high-priority issue for most Westerners, it would be nice to keep the internet as free and open as it was intended to be and have a built-in way of verifying that content you are reading is the content the authors published.

It makes us — and our data — sitting ducks. The really scary side of the highly centralized internet is the accumulation of personal data. Large companies that provide services we all need to use in one way or another are sitting on monumental caches of people’s data — data that gives them enough information about you to predict what you’re going to buy, who you’re going to vote for, when you are likely to buy a house, even how many children you’re likely to have. Information that is more than enough to get a credit card, a loan, or even buy a house in your name.

You may be ok with that. After all, they were trustworthy enough for you to give them your information in the first place, but it’s not them you need to worry about. It’s everyone else. Earlier this year, credit reporting agency Equifax lost data on 140 million of its customers in one of the biggest data breaches in history. That data is now public. We can dismiss this as a once in a decade event that could have been prevented if we’d been more careful, but it is becoming increasingly clear that data breaches like this are very hard to prevent entirely and too dangerous to tolerate. The only way to really prevent them is to not gather the data on that scale in the first place.

So, what will replace the cloud?

An internet powered largely by client-server protocols (like HTTP), with security based on trust in a central authority (like TLS), is flawed and causes problems that are fundamentally either really hard or impossible to solve. It's time to look for something better — a model where nobody else is storing your personal data, large media files are spread across the entire network, and the whole system is entirely peer-to-peer and serverless (and I don't mean "serverless" in the cloud-hosted sense here; I mean literally no servers).

I’ve been reading extensively about emerging technologies in this space and have become pretty convinced that peer-to-peer is where we’re inevitably going. Peer-to-peer web technologies are aiming to replace the building blocks of the web we know with protocols and strategies that solve most of the problems I’ve outlined above. Their goal is a completely distributed, permanent, redundant data storage, where each participating client in the network is storing copies of some of the data available in it.

Image source: Wikimedia Commons (http://ift.tt/2xzBAaf)

If you’ve heard about BitTorrent, the following should all sound familiar. In BitTorrent, users of the network share large data files split into smaller blocks (each with a unique ID) without the need for any central authority. In order to download a file, all you need is a “magic” number — a hash — a fingerprint of the content. The BitTorrent client will then find peers that have pieces of the file and download them, until you have all the pieces.

The interesting part is how the peers are found. BitTorrent uses a protocol called Kademlia for this. In Kademlia, each peer on the network has a unique ID number of the same length as the unique block IDs. The network stores a block with a particular ID on the node whose ID is "closest" to the ID of the block. With random IDs for both blocks and network peers, the distribution of storage should be pretty uniform across the network. There is a benefit, however, to not choosing the block ID randomly and instead using a cryptographic hash — a unique fingerprint of the content of the block itself. This makes the blocks content-addressable. It also makes it easy to verify the content of a block (by re-calculating and comparing the fingerprint) and provides the guarantee that, given a block ID, it is impossible to download any data other than the original.
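
A toy sketch of content addressing and XOR "closeness" (illustrative only; it ignores Kademlia's routing tables and bucket mechanics, and the peer IDs are invented for the example):

```python
import hashlib

def block_id(content: bytes) -> int:
    # Content addressing: a block's ID is the cryptographic hash of its bytes.
    return int.from_bytes(hashlib.sha256(content).digest(), "big")

block = b"funny cat picture"
bid = block_id(block)

# Verification is built in: re-hash what you received and compare.
assert block_id(block) == bid  # any tampering changes the ID

# Kademlia stores a block on the peers whose IDs are XOR-"closest" to it.
peer_ids = [block_id(f"peer-{n}".encode()) for n in range(8)]
closest = min(peer_ids, key=lambda p: p ^ bid)
```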

The other interesting property of using a content hash for addressing is that by embedding the ID of one block in the content of another, you link the two together in a way that can’t be tampered with. If the content of the linked block is changed, its ID would change and the link would be broken. If the embedded link is changed, the ID of the containing block would change as well.

This mechanism of embedding the ID of one block in the content of another makes it possible to create chains of such blocks (like the blockchain powering Bitcoin and other cryptocurrencies) or even more complicated structures, generally known as Directed Acyclic Graphs, or DAGs for short. (This kind of link is called a Merkle link after its inventor, Ralph Merkle. So if you hear someone talking about Merkle DAGs, you know roughly what they are.) One common example of a Merkle DAG is a git repository. Git stores the commit history and all directories and files as blocks in a giant Merkle DAG.
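
A minimal sketch of a Merkle link, assuming a toy JSON block store (real systems like git and IPFS use their own serialization formats):

```python
import hashlib
import json

store = {}  # block ID -> raw bytes

def put(content: dict) -> str:
    # Store a block under the hash of its own content and return that ID.
    data = json.dumps(content, sort_keys=True).encode()
    bid = hashlib.sha256(data).hexdigest()
    store[bid] = data
    return bid

# Embedding one block's ID in another creates a tamper-evident Merkle link:
first = put({"msg": "first revision", "parent": None})
second = put({"msg": "second revision", "parent": first})
# Editing the first block would change its ID, silently breaking the link
# held by the second, so history cannot be rewritten in place.
```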

And that leads us to another interesting property of distributed storage based on content-addressing: It’s immutable. The content cannot change in place. Instead, new revisions are stored next to existing ones. Blocks that have not changed between revisions get reused, because they have, by definition, the same ID. This also means identical files cannot be duplicated in such a storage system, translating into efficient storage. So on this new web, every unique cat picture will only exist once (although in multiple redundant copies across the swarm).

Protocols like Kademlia together with Merkle chains and Merkle DAGs give us the tools to model file hierarchies and revision timelines and share them in a large scale peer-to-peer network. There are already protocols that use these technologies to build a distributed storage that fits our needs. One that looks very promising is IPFS.

The problem with names and shared things

Ok, so with the above techniques, we can solve quite a few of the problems I outlined at the beginning: We get distributed, highly redundant storage on devices connected to the web that can keep track of the history of files and keep all the versions around for as long as they are needed. This (almost) solves the availability, capacity, permanence, and content verification problem. It also addresses bandwidth problems — peers send data to each other, so there are no major hotspots/bottlenecks.

We will also need a scalable compute resource, but this shouldn’t be too difficult: Everyone’s laptops and phones are now orders of magnitude more powerful than what most apps need (including fairly complex machine learning computations), and compute is generally pretty horizontally scalable. So as long as we can make every device do the work necessary for its user, there shouldn’t be a major problem.

So now that cat image I want to see on Slack can come from one of my coworkers sitting next to me instead of from the Slack servers (and without crossing any oceans in the process). In order to post a cat picture, though, I need to update a channel in place (i.e., the channel will no longer be what it was before my message, it will have changed). This fairly innocuous sounding thing turns out to be the hard part. (Feel free to skip to the next section if this bit gets too technical.)

The hard part: Updating in place

The concept of an entity that changes over time is really just a human idea to give the world some order and stability in our minds. We can also think about such an entity as just an identity — a name — that takes on a series of different values (which are static, immutable) as time progresses (Rich Hickey explains this really well in his talks Are we there yet? and The value of values). This is a much more natural way of modelling information in a computer, with more natural consequences. If I tell you something, I can no longer change what I told you, or make you unlearn it. Facts, e.g. who the President of the United States is, don’t change over time; they just get superseded by other facts referred to by the same name, the same identity. In the git example, a ref (branch or tag) can point to (hold an ID and thus a value of) a different commit at different times, and making a commit replaces the value it currently holds. The Slack channel would also represent an identity whose values over time are growing lists of messages.
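
A toy sketch of that name-versus-value distinction (the helper and ref names here are invented for illustration, not git's actual object model):

```python
import hashlib

def value_id(data: bytes) -> str:
    # Each value is immutable and addressed by its content hash.
    return hashlib.sha256(data).hexdigest()

v1 = value_id(b"message list: []")
v2 = value_id(b"message list: ['hello']")

# The identity is just a name that points at one value at a time:
refs = {"channel/general": v1}
refs["channel/general"] = v2  # "updating" rebinds the name; v1 still exists
```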

The real trouble is, we’re not alone in the channel. Multiple people try to post messages and change the channel, sometimes simultaneously, and someone needs to decide what the result should be.

In centralized systems, such as pretty much all current web apps, there is a single central entity deciding this “update race” and serializing the events. Whichever event reaches it first wins. In a distributed system, however, everyone is an equal, so there needs to be a mechanism that ensures the network reaches a consensus about the “history of the world.”

Consensus is the most difficult problem to solve for a truly distributed web supporting the whole range of applications we are using today. It doesn't only affect concurrent updates, but also any other updates that need to happen "in place" — updates where the "one source of truth" changes over time. This issue is particularly difficult for databases, but it also affects other key services, like DNS. Registering a human-readable name for a particular block ID or series of IDs in a decentralized way means everyone involved needs to agree that the name exists and has a particular meaning; otherwise two different users could see two different files under the same name. Content-based addressing solves this for machines (remember, a name can only ever point to one particular piece of matching content), but not for humans.

A few major strategies exist for dealing with distributed consensus. One of them involves selecting a relatively small “quorum” of managers with a mechanism for electing a “leader” who decides the truth (if you’re interested, look at the Paxos and Raft protocols). All changes then go through the manager. This is essentially a centralized system that can tolerate a loss of the central deciding entity or an interruption (a “partition”) in the network.
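
As a back-of-the-envelope illustration, here is the quorum arithmetic only, not the Paxos or Raft protocols themselves:

```python
# A cluster of N nodes stays available as long as a majority (quorum)
# of nodes is reachable; this is the math behind Paxos/Raft sizing.
def quorum(n: int) -> int:
    return n // 2 + 1

for n in (3, 5, 7):
    print(f"{n} nodes: quorum of {quorum(n)}, tolerates {n - quorum(n)} failures")
```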

Another approach is a proof-of-work based system like Bitcoin blockchain, where consensus is ensured by making peers solve a puzzle in order to write an update (i.e. add a valid block to a Merkle chain). The puzzle is hard to solve but easy to check, and some additional rules exist to resolve a conflict if it still happens. Several other distributed blockchains use a proof-of-stake based consensus while reducing the energy demands required to solve a puzzle. If you’re interested, you can read about proof of stake in this whitepaper by BitFury.
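
The core proof-of-work trick fits in a few lines; a toy sketch assuming a simple leading-zeros difficulty rule (real chains hash structured block headers and retune difficulty dynamically):

```python
import hashlib

def mine(payload: bytes, difficulty: int = 4) -> int:
    # Search for a nonce whose hash has `difficulty` leading zero hex digits:
    # expensive to find, but any peer can verify it with a single hash.
    nonce = 0
    while not hashlib.sha256(payload + str(nonce).encode()).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

nonce = mine(b"proposed update")
digest = hashlib.sha256(b"proposed update" + str(nonce).encode()).hexdigest()
assert digest.startswith("0000")  # cheap verification
```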

Yet another approach for specific problems revolves around CRDTs — conflict-free replicated data types, which, for specific cases, don’t suffer from the consensus problem at all. The simplest example is an incrementing counter. If all the updates are just “add one,” as long as we can make sure each update is applied just once, the order doesn’t matter and the result will be the same.
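
A sketch of the simplest CRDT, a grow-only counter: each replica increments only its own slot, and merging takes per-slot maximums, so replicas converge regardless of update order (the class and names are illustrative):

```python
class GCounter:
    """Grow-only counter CRDT: concurrent increments merge without conflicts."""

    def __init__(self):
        self.counts = {}  # replica id -> increments seen from that replica

    def increment(self, replica: str):
        self.counts[replica] = self.counts.get(replica, 0) + 1

    def merge(self, other: "GCounter"):
        # Per-slot max is commutative, associative, and idempotent.
        for replica, n in other.counts.items():
            self.counts[replica] = max(self.counts.get(replica, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter(), GCounter()
a.increment("alice")
b.increment("bob")
b.increment("bob")
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3  # both replicas agree, no coordination needed
```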

There doesn’t seem to be a clear answer to this problem just yet and there may never be only one, but a whole lot of clever people are working on it, and there are already a lot of interesting solutions out there to pick from. You just need to select the particular trade-off you can afford. The trade-off generally lies in the scale of a swarm you’re aiming for and picking a property of the consensus you’re willing to let go of at least a little — availability or consistency (or, technically, network partitioning, but that seems difficult to avoid in a highly distributed system like the ones we’re talking about). Most applications seem to be able to favor availability over immediate consistency — as long as the state ends up being consistent in reasonable time.

Privacy in the web of public files

One obvious problem that needs addressing is privacy. How do we store content in the distributed swarm of peers without making everything public? If it's enough to hide things, content-addressed storage is a good choice, since in order to find something, you need to know the hash of its content (somewhat like private Gists on GitHub). So essentially we have three levels of privacy: public, hidden, and private. The answer to the third one, it seems, is cryptography — strongly encrypting the stored content and sharing the key "out of band" (e.g., physically on paper, by touching two NFC devices, by scanning a QR code, etc.).
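
As a minimal sketch of that "private" level, assuming Python's third-party cryptography package: encrypt before publishing, so the swarm only ever sees ciphertext.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # shared "out of band": paper, NFC tap, QR code...
ciphertext = Fernet(key).encrypt(b"wedding video bytes")

# `ciphertext` can safely live in the public, content-addressed swarm;
# only holders of `key` can recover the plaintext.
assert Fernet(key).decrypt(ciphertext) == b"wedding video bytes"
```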

Relying on cryptography may sound risky at first (after all, hackers find vulnerabilities all the time), but it’s actually not that much worse than what we do today. In fact, it’s most likely better in practice. Companies and governments generally store sensitive data in ways that aren’t shareable with the public (including the individuals the data is about). Instead, it’s accessible only to an undisclosed number of people employed by the organizations holding the data and is protected, at best, by cryptography based methods anyway. More often than not, if you can gain access to the systems storing this data, you can have all of it.

But if we move instead to storing private data in a way that’s essentially public, we are forced to protect it (with strong encryption) so that it is no good to anyone who gains access to it. This idea is roughly the same as the one behind making security-related software open source so that anyone can look at it and find problems. Knowing how the security works shouldn’t help you break it.

An interesting property of this kind of access control is that once you’ve granted someone access to some data, they will have it forever for that particular revision of the data. You can always change the encryption key for future revisions, of course. This is also no worse than what we have today, even though it may not be obvious: Given access to some data, anyone can always make a private copy of it.

The interesting challenge in this area is coming up with a good system of establishing and verifying identities and sharing private data among a group of people that needs to change over time, e.g. a group of collaborators on a private git repository. It can definitely be done with some combination of private-key cryptography and rotating keys, but making the user experience smooth is likely going to be a challenge.

From the cloud to a … fog

Hard problems to solve notwithstanding, our migration away from the cloud will be quite an exciting future. First, on the technical front, we should get a fair number of improvements out of a peer-to-peer web. Content-addressable storage provides cryptographic verification of content itself without a trusted authority, hosted content is permanent (for as long as any humans are interested in it), and we should see fairly significant speed improvements, even at the edges in the developing world (or even on another planet!), far away from data centers.

At some point even data centers may become a thing of the past. Consumer devices are getting so powerful and ubiquitous that computing power and storage (a computing “substrate”) is almost literally lying in the streets.

For businesses running web applications, this change should translate to significant cost savings and far fewer headaches building reliable digital products. Businesses will also be able to focus less on downtime risk mitigation and more on adding customer value, benefitting everyone. There is still going to be a need for cloud-hosted servers, but they will only be one of many similar peers. We could also see heterogeneous applications, where not all the peers are the same — where consumer-facing peers and back-office peers belong to the same application "swarm" and differ only in their access level, enforced by cryptography.

The other large benefit for both organizations and customers is in the treatment of customer data. When there’s no longer any need to centrally store huge amounts of customer information, there’s less risk of losing such data in bulk. Leaders in the software engineering community (like Joe Armstrong, creator of Erlang, whose talk from Strange Loop 2014 is worth a watch) have long argued that the design of the internet where customers send data to programs owned by businesses is backwards and that we should instead send programs to customers to execute on their privately held data that is never directly shared. Such a model seems much safer and doesn’t in any way prevent businesses from collecting useful customer metrics they need.

And nothing prevents a hybrid approach with some services being opaque and holding on to private data.

This type of application architecture seems a much more natural way to do large scale computing and software services — an Internet closer to the original idea of open information exchange, where anyone can easily publish content for everyone else and control over what can be published and accessed is exercised by consensus of the network’s users, not by private entities owning servers.

This, to me, is hugely exciting. And it's why I'd like to get a small team together and, within a few weeks, build a small, simple proof-of-concept mobile application, using some of the technologies mentioned above, to show what can be done with the peer-to-peer web. The only current idea I have that is small enough to build relatively quickly and interesting enough to demonstrate the properties of such an approach is a peer-to-peer, truly serverless Twitter clone, which isn't particularly exciting.

If you’ve got a better idea (which isn’t too hard!), or if you have anything else related to peer-to-peer distributed web to talk about, please tweet at me; I’d love to hear about it!


via VentureBeat http://ift.tt/2y3loKF

SpeakPipe Now Works on iPads

This Could Be An Interesting Adaptation

SpeakPipe is a neat tool that I have been recommending for years. You can add it to your blog to collect voice messages from blog visitors. The messages are automatically recorded and transcribed for you to listen to and/or read. Unfortunately, until now it didn't work if your blog visitors were using iPads. That recently changed when SpeakPipe pushed an update for Safari.

SpeakPipe now works in Safari on iPads and iPhones that are using iOS 11.

Applications for Education

When installed on a classroom blog, SpeakPipe provides a good way for parents to leave voicemail messages. Having your messages in SpeakPipe lets you dictate a response that can then be emailed back to the person who left the message.

SpeakPipe offers another tool called SpeakPipe Voice Recorder. SpeakPipe’s Voice Recorder is a free tool for quickly creating an MP3 voice recording in your web browser on a laptop, Chromebook, Android device, or iOS device. To create a recording with the SpeakPipe Voice Recorder simply go to the website, click “start recording,” and start talking. You can record for up to five minutes on the SpeakPipe Voice Recorder. When you have finished your recording you will be given an embed code that you can use to place it in your blog or website. You will also be given a link to share your recording. Click the link to share your recording and that will take you to a page to download your recording as an MP3 file.

SpeakPipe’s Voice Recorder does not require you to register in order to create and download your audio recordings. The lack of a registration requirement makes it a good choice for students who don’t have email addresses or for anyone else who simply doesn’t want to have to keep track of yet another username and password.

Students could use SpeakPipe’s Voice Recorder to record short audio interviews or to record short audio blog entries.

Teachers could use SpeakPipe’s Voice Recorder to record instructions for students to listen to in lieu of having a substitute teacher read instructions to their students.

This post originally appeared on Free Technology for Teachers; if you see it elsewhere, it has been used without permission.

 

via Free Technology for Teachers http://ift.tt/2yMQCaa

Is AR Good 4 Teaching & Learning? Or should we get real?

Augmented reality is nothing new for youth. It has been a part of students' social experience in apps like Snapchat, and it made a big splash when Pokémon Go made its debut. But when it comes to learning, does it have a place?

While seeing an object, insect, or animal up close in augmented reality is certainly preferable to reading about it in your science text, is it really the best way to help students learn?

Is learning via AR better than that?

Well, yeah. Probably. It will engage kids with the wow factor for a bit, but then what?

And what about the source? Who wants us to buy into this? A textbook provider? A publisher? A testing company? A hardware or software provider?

What’s in it for them?

And, what about all the other ways to learn? Is it better than that? Is it cost effective?

AR: The Verdict? It depends.

When compared to textbooks, most would agree that AR improves upon the learning experience. It can also help make a textbook a bit more interactive and give it some life.

But what about other options? A powerful novel? A game? A MagniScope? A PBS documentary? A YouTube expert?

To help think about this, I turned to my friends at Modern Learners for some insights.
When thinking about AR, VR, mixed reality, and the like, Gary Stager asks: are we "investing in reality first" before we invest in such technologies?

That’s a good question. Especially for kids who live in big cities like where I work. In New York City we have cultural neighbourhoods, experiences, some of the finest museums, zoos, gardens, and experts right in the backyard of our schools. Are we taking students there? Or if we aren’t in such communities, are we using resources like Facebook Live, Periscope, and Skype to connect and interact with real people and places in other parts of the world?

When I served as a library media specialist at an inner-city school in Harlem, we had immersive experiences in places like Chinatown, Little Italy, and Spanish Harlem. We visited places like El Museo del Barrio and the Tenement Museum. We had scavenger hunts around the neighbourhoods, and the museums were happy to freely open their doors to our inner-city youth visiting on weekdays.

Of course, there are times when a real experience cannot take the place of a virtual one. For example, a trip to Mars or the Titanic is out of reach. Engaging in or witnessing an activity that is dangerous for a newbie, such as driving a car, flying a plane, or operating a train, is another example.

But even with such extremes, there may be a movie, field trip, game, or museum experience that might provide a better learning experience.

In his Modern Learners podcast, Will Richardson puts it this way: if for some reason we really can't invest in realities, then yes, these "halfway measures for poor kids" make sense, but only if it really is not possible to bring students more authentic opportunities.

But let’s make sure those real experiences are not available before jumping into augmented ones.

Consider this…

When trying to determine what is best for students, here are some questions you can ask:

  • How would a student use this outside of school?

  • Does it help a young person gain agency over their learning?

  • Does this have a real-life use?

  • Is this better than…
      • Reading about it?
      • Watching it?
      • Doing it?

When you consider those questions, you will be better positioned to determine and explain whether augmented reality should become a reality for the students you teach.

via Lisa Nielsen: The Innovative Edu… http://ift.tt/2yI8Xax

10 Reasons Kids Should Learn to Code

Learning about Computational Thinking, often referred to as coding (which is really the “written” part of the process), is a new literacy that is overlooked for myriad reasons: “It’s too hard”, “I don’t understand it, so it will be impossible to teach”, “It doesn’t fit into any curricular area”, “There is no math in it at all”, “It’s just not appropriate for little ones”. I’ve pretty much heard the gamut of reasons why this process, not dissimilar to the Design Thinking or Inquiry processes taking place in Making/Tinkering and STEAM environments, is not viable in classrooms today. The reality is that computational thinking is a YAIEP, or Yet Another Inquiry Entry Point. This should be comforting for most. Inquiry and, more recently, Design Thinking are processes that have been used extensively in the STEAM and Maker movements that have swept educational institutions. These programs feature pedagogy that empowers students to take more responsibility for their learning pathways: directing their learning through questions and personal perspectives; finding and solving unique problems that have meaning and importance to them; collaborating to make sense of the data they collect; communicating with authentic audiences and experts to share and obtain information; and demonstrating their understandings in unique ways. This is Computational Thinking at its best as well. But there are added benefits too, and the article below highlights these beautifully…  (Keith Strachan)


[Image: word splash of coding words]


When it comes to preparing your children for the future, there are few better ways to do so than to help them learn to code! Coding helps kids develop academic skills, build qualities like perseverance and organization, and gain valuable 21st century skills that can even translate into a career. From the Tynker blog, here are the top 10 reasons kids should learn to code:

Coding Improves Academic Performance

  1. Math: Coding helps kids visualize abstract concepts, lets them apply math to real-world situations, and makes math fun and creative!
  2. Writing: Kids who code understand the value of concision and planning, which results in better writing skills. Many kids even use Tynker as a medium for storytelling!
  3. Creativity: Kids learn through experimentation and strengthen their brains when they code, allowing them to embrace their creativity.
  4. Confidence: Parents enthusiastically report that they’ve noticed their kids’ confidence building as they learn to problem-solve through coding!

Coding Builds Soft Skills

  5. Focus and Organization: As they write more complicated code, kids naturally develop better focus and organization.
  6. Resilience: With coding comes debugging – and there’s no better way to build perseverance and resilience than working through challenges!
  7. Communication: Coding teaches logical communication, strengthening both verbal and written skills. Think about it: learning code means learning a new language!

Coding Paves a Path to the Future

  8. Empowerment: Kids are empowered to make a difference when they code – we’ve seen Tynkerers use the platform to spread messages of tolerance and kindness!
  9. Life Skills: Coding is a basic literacy in the digital age, and it’s important for kids to understand – and be able to innovate with – the technology around them.
  10. Career Preparation: There’s a high demand for workers in the tech industry; mastering coding at a young age allows kids to excel in any field they choose!

Tynker makes it fun and easy for kids to learn how to code! Kids use Tynker’s visual blocks to begin learning programming basics, then graduate to written programming languages like Python, JavaScript, and Swift. Our guided courses, puzzles, and more ensure that every child will find something that ignites their passion for learning. Explore our plans and get your child started coding today!
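To give a feel for that graduation step, here is the kind of first text-based program a student might write after block coding. This is a hypothetical sketch in plain Python, not an example from Tynker’s own curriculum:

```python
# A classic first text-based program: a number-guessing game.
# Hypothetical example for illustration; not from Tynker's curriculum.
import random

secret = random.randint(1, 20)  # the computer picks a number from 1 to 20
guess = None

while guess != secret:
    guess = int(input("Guess my number (1-20): "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")

print("You got it!")
```

Even a tiny program like this exercises the loops, conditions, and debugging habits described in the lists above.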

via www.tynker.com http://ift.tt/2i2cGVZ

Is DNA the future of data storage? – Leo Bear-McGuinness

Check out our Patreon page: http://ift.tt/2v1FEd5 View full lesson: http://ift.tt/2fX7DFW In the event of a nuclear fallout, every piece of digital and written information could be lost. Luckily, there is a way that all of human history could be recorded and safely stored beyond civilization’s end. And the key ingredient is inside all of us: our DNA. Leo Bear-McGuinness explains. Lesson by Leo Bear-McGuinness, animation by TED-Ed.
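The core trick is easy to sketch: DNA has four bases (A, C, G, T), so each base can carry two bits of data. Here is a minimal, illustrative Python sketch of that idea, with an assumed two-bits-per-base mapping; real DNA-storage schemes also add error correction and avoid long runs of the same base, none of which is shown here:

```python
# Illustrative sketch: map bytes to DNA bases, two bits per base.
# The 00/01/10/11 -> A/C/G/T mapping below is an assumption for demonstration.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of bases, two bits at a time."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: bases back to bits, bits back to bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)          # CAGACGGC
print(decode(strand))  # b'Hi'
```

Decoding simply reverses the mapping, which is why the information survives for as long as the molecule can be read.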
From: TED-Ed

via TED Education https://www.youtube.com/watch?v=r8qWc9X4f6k

Jigsaw variant – Pulsing

Pulsing is a jigsaw variant that allows students to benefit from the “hive” mind, but also insists on individual accountability in terms of project and task completion.

I use pulsing a lot for research… I have attached an example I used with a Grade 7 class doing an inquiry on creating a fully functional island with a government, a people, a culture, a population centre, and so on.

My belief is that structures such as this address the following learning structure considerations…

  1. Student Voice
  2. Accountability
  3. Broadening Perspectives
…and are vitally important in an educational landscape. See below.

NEW eBOOK – SEESAW ENTRY POINTS

Learning Technologies Support would like to take a moment to thank all the teachers and students who made this eBook possible. They have worked tirelessly to learn the Seesaw tool while continuing to refine and perfect already solid assessment-for-learning practices to fit this new process-portfolio and assessment-for-learning management tool.

The examples shared highlight various aspects of student and teacher learning, reflected on from Nursery through Grade 6. It is exciting to see how insightful and detailed some of the reflections are.

In Chapters 7 through 12, we are beginning to see teachers and students making connections to outcomes and criteria in more purposeful, direct, and meaningful ways during the reflection and posting process in Seesaw. This is not to say that this isn’t being done daily at the classroom level; rather, the processes in place in the classroom have not yet fully transferred into the Seesaw environment. Hopefully, the training provided over the course of this year and next (outlined in Chapter 2) will help with this.

The examples in Chapters 5 and 7 through 12 also demonstrate current practices that one might expect to see in evidence in Winnipeg School Division classrooms today: Inquiry, Design Thinking, Computational Thinking, the 6 Cs, and so on.

There is plenty of evidence of creative connections with parents in Chapter 4: conversations about learning, education, and upcoming and past events, as well as friendly, community-building exchanges.

Mobile Learning seems alive and well. Chapter 3 highlights examples of App- and Media-Smashing, where learners demonstrate their creativity and inventiveness in designing and completing their tasks. It was especially encouraging to see examples where a physical medium (dance or clay) was used in conjunction with a digital medium (video or animation).

Overall, the Seesaw implementation is progressing well. Please use this resource as a guide to assist you and your class in creating powerful, learning-focused, reflective posts guided by co-created criteria, outcomes, and clear tasks for the Seesaw Learning Journals your students will be creating.

The eBook itself is designed to be viewed in an eReader of some kind (iBooks, Adobe Digital Editions, Bluefire Reader, and the like) on a mobile device such as a phone or tablet, or on a laptop. Within a short period of time this book may be deployed to all “open” or “non-student” iPads in the Division, hopefully directly in the iBooks reader. But it can also be downloaded and installed via the portal at the following link… Evidence of Learning in Seesaw iBook, or in the Digital Portfolio section of our portal site. I will provide a tutorial to lead you through this at the following link… SEESAW: How to Download & Install a Seesaw eBook…