Learning

The Brave New (Wired) World of Online Education


It is a brave new world, indeed, in which milk, cars, and spouses can all be acquired via the Internet. But for all our advances, the jury is still out regarding the most effective ways to teach online.

Many online learning platforms consist of passive video lectures and podcasts, or universities repackaging classes for the web. To illustrate, imagine you have students who have never seen a pizza before and want to learn how to make one. With most current online teaching methods, they likely wouldn’t throw the dough, choose the toppings, or get feedback on their work; instead, they’d sit quietly through written descriptions and video lectures.

The prevalence of this passive approach demonstrates a key challenge in the pursuit of engaging, effective web-based education: the issue of interactivity. While more studies are showing that interactivity breeds engagement and information retention, instructors and platforms are still struggling to employ effective levels and modes of interactivity.

Researchers at Columbia University’s Community College Research Center examined 23 entry-level online courses at two separate community colleges and made some interesting discoveries about this phenomenon. Their assessment was that most of the course material was “text-heavy” and that it “generally consisted of readings and lecture notes. Few courses incorporated auditory or visual stimuli and well-designed instructional software.” While technology that supported feelings of interpersonal interaction was found to be helpful, the mere incorporation of technology was insufficient, and students recognized it as such. The research noted that “Simply incorporating technology into a course does not necessarily improve interpersonal connections or student learning outcomes.”

The research specifically called out message boards (where instructor presence and guidance were minimal) as insufficiently interactive to engage students in a way they found clear and useful. The consensus of the research was that “effective integration of interactive technologies is difficult to achieve, and as a result, few online courses use technology to its fullest potential.”

Another interesting look at web-based learning and interactivity is a 2013 study conducted by Dr. Kenneth J. Longmuir of UC Irvine. Motivated by the fact that most “computerized resources for medical education are passive learning activities,” Professor Longmuir created his own online modules designed for the iPad (and other mobile devices). These three online modules replaced three of his classroom lectures on acid-base physiology for first-year medical students. Using a Department of Defense handbook as his guide for incorporating different levels of activity, Longmuir presented text and images side by side and embedded a question-and-answer format. According to the study, “The most frequent statement was that students appreciated the interactive nature of the online instruction.” In fact, 97% of surveyed students said it improved the learning experience. They reported not only that the online material took less time to master than in-person lectures, but also that the interactivity of the modules was the “most important aspect of the presentation.”

While Dr. Longmuir was reluctant to draw hard conclusions about this particular online course’s efficacy (due to variables in student procrastination, students skipping important material, etc.), there are a few clear points to be taken from both studies. For one, engaging, interactive content is the exception, not the rule, in today’s online learning environment. Both studies suggest the importance of interactivity in online learning—if not definitively in test results (though that’s a possibility), certainly in how students feel about their engagement with the material. This isn’t surprising since research is showing that lack of interactivity in traditional classrooms is detrimental, as well.

While the science behind producing effective online learning courses is still in development, the need for meaningful interactivity in new educational technology seems like a no-brainer. If we hope to teach our students to make that pizza, the most effective way is not to drown them in video clips and PDF files; we should create online learning experiences that mimic—or even improve upon—the interactivity and satisfaction that pounding the dough themselves would provide.

 

Company

Pedago Announces Partnership with Top Business School INSEAD

Today we’re very excited to announce a new partnership with INSEAD, one of the leading business schools globally.

INSEAD, which pioneered MBA programs in Europe over fifty years ago, earns top rankings from Forbes, the Financial Times, and Business Insider, and is ranked number one in Europe and Asia-Pacific by the QS Global 200 Business Schools Report (registration required to view), which ranks institutions according to the preferences of over 4,000 actively hiring MBA employers across the world. INSEAD faculty created Blue Ocean Strategy, a revolutionary and highly celebrated approach to business strategy, and the school’s founder, Georges Doriot, has been dubbed the “father of venture capitalism.” In short, they’re kind of a big deal, and we’re honored to be working with them!

INSEAD holds cutting-edge research and innovation in teaching as foundational pillars of their institution, and in line with these core values, they’ve offered us the opportunity to work closely with them and their incoming students to explore the ever-expanding and changing world of online education and educational technology. At Pedago, we believe that technology can accelerate learning outcomes by enabling education wherever the learner may be. We strive to create a more fulfilling and effective online experience.

We’d like to take this opportunity to welcome INSEAD students of the class of 2015 to our program. We thank in advance all participants for being a part of this milestone in our development.


Questions, comments? You should follow us on Twitter here.

Engineering

Build and Deploy with Grunt, Bamboo, and Elastic Beanstalk

In response to Twitter feedback on our recent post “Goodbye, Sprockets! A Grunt-based Rails Asset Pipeline,” we’d like to share an overview of our current build and deploy process.

It goes a little something like this:

Local development environment

We currently have a single git-managed project containing our Rails server at the top level and our Angular project in a subdirectory of vendor. Bower components are checked in to our repo to speed up builds and deploys. The contents of our gruntfile and the organization of our asset pipeline are described here.

We can start up our server via grunt server (which we have configured to shell out to rails server) or directly with rails server for Ruby debugging.
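
For reference, here’s a minimal sketch of how a grunt server wrapper task can shell out to rails server. It’s an illustration only, not our actual Gruntfile; the task names are hypothetical, and the real task also kicks off asset compilation and our watch targets first.

```javascript
// Gruntfile.js (sketch) -- a wrapper task that shells out to `rails server`.
module.exports = function (grunt) {
  var spawn = require('child_process').spawn;

  grunt.registerTask('rails', 'Run the Rails server as a child process', function () {
    var done = this.async();
    // Inherit stdio so Rails logs stream to the same terminal as grunt.
    var rails = spawn('bundle', ['exec', 'rails', 'server'], { stdio: 'inherit' });
    // Keep grunt alive until Rails exits (Ctrl-C stops both).
    rails.on('exit', function (code) { done(code === 0); });
  });

  // In our real Gruntfile, `server` also compiles assets and starts the
  // watch/LiveReload tasks before launching Rails.
  grunt.registerTask('server', ['rails']);
};
```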

Even though both the client and server apps are checked into the same project and share an asset pipeline, we restrict our Angular code so that it communicates with the backend Rails server only over APIs. This enforces a clean separation between client and server.

Bamboo build project

When Angular and Rails code is checked in to master, our Bamboo build process runs. We always push through master to production, à la the GitHub flow process. The build process comprises two stages:

Stage 1: Create Artifacts:

  • Rails: bundle install and freeze gems.
  • Angular: npm install, grunt build. No bower install is needed because we check in our bower_components. The grunt build step compiles, concatenates, and minifies code and assets. It also takes the unusual step of cache-busting the asset filenames and rewriting any references in view files to point to the new filenames (a rough sketch of this step follows the list).
  • The resulting artifact is saved in Bamboo and passed to Stage 2.
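
To make that cache-busting step concrete, here is a minimal sketch of the idea: content-hash each compiled asset, rename it, and rewrite references in the view files. This is an illustration only, not our actual grunt-cache-bust configuration, and the paths in the usage example are hypothetical.

```javascript
// cache-bust-sketch.js -- hash an asset's contents, rename the file, and
// rewrite references in view files to point at the new name.
var crypto = require('crypto');
var fs = require('fs');
var path = require('path');

function cacheBust(assetPath, viewPaths) {
  // e.g. app.js -> app.3fa9c1d2.js, keyed off the file's contents
  var contents = fs.readFileSync(assetPath);
  var hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 8);
  var ext = path.extname(assetPath);
  var bustedPath = assetPath.replace(ext, '.' + hash + ext);
  fs.renameSync(assetPath, bustedPath);

  // Point any view references at the cache-busted filename.
  viewPaths.forEach(function (view) {
    var html = fs.readFileSync(view, 'utf8');
    html = html.split(path.basename(assetPath)).join(path.basename(bustedPath));
    fs.writeFileSync(view, html);
  });
}

// Hypothetical usage:
// cacheBust('public/assets/app.js', ['app/views/layouts/application.html.erb']);
```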

Stage 2: Run Tests:

  • Rails: run rspec model and controller tests, and then cucumber integration tests. It was a bit tricky to get headless cucumber tests running on Bamboo’s default Amazon AMI; see details in our previous blog post.
  • Angular: grunt test.

If the artifact creation succeeds, and the tests run on that artifact all pass, Bamboo triggers its associated deploy project. Otherwise, our team receives failure notifications in HipChat.

Bamboo deploy project

After every successful build, Bamboo is configured to automatically deploy the latest build to our staging environment.

The Bamboo deployment project runs the following tasks to kick off an Elastic Beanstalk deployment:

  1. Write out an aws_credentials file to the build machine. We don’t store any credentials on our custom AMIs. Instead, we keep them in Bamboo as configuration variables and write them out to the build machine at deploy time.

  2. Run Amazon’s AWSDevTools-RepositorySetup.sh script to add aws.push to the set of available git tasks on the build machine.

  3. Kick off the deployment to our Elastic Beanstalk staging environment with a call to git aws.push from the build machine’s project root directory.

Since our project is configured to use Elastic Beanstalk, the remaining deployment-related configuration (like which Elastic Beanstalk project and stage to push the update to) is checked in to the .elasticbeanstalk and .ebextensions directories in our project and made available to the git aws.push command. If there is interest in sharing the contents of these config files, please let us know on Twitter.
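
For illustration, here is a rough Node sketch of what those three tasks amount to on the build machine. This is not our actual Bamboo configuration; the credentials file location and format shown are assumptions, and the real values come from Bamboo configuration variables at deploy time.

```javascript
// deploy-sketch.js -- approximation of the three Bamboo deploy tasks.
// Credential file path/format and the project path are assumptions.
var fs = require('fs');
var execSync = require('child_process').execSync;

// 1. Write credentials out to the build machine (sourced from Bamboo
//    variables via the environment, never stored on the AMI or in the repo).
fs.writeFileSync('/home/bamboo/.aws_credentials',
  'AWSAccessKeyId=' + process.env.AWS_ACCESS_KEY_ID + '\n' +
  'AWSSecretKey=' + process.env.AWS_SECRET_ACCESS_KEY + '\n');

// 2. Register the `git aws.push` task provided by Amazon's EB dev tools.
execSync('bash AWSDevTools-RepositorySetup.sh', { stdio: 'inherit' });

// 3. Push the current commit to the Elastic Beanstalk staging environment.
execSync('git aws.push', { cwd: '/path/to/project', stdio: 'inherit' });
```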

Elastic Beanstalk staging environment

After the staging deployment has been kicked off by Bamboo, we can head over to our EB console at https://console.aws.amazon.com/elasticbeanstalk and monitor the deployment while it completes. The git aws.push command from the previous step is doing the majority of the work behind the scenes. For staging, we use Amazon’s default Rails template, and “Environment type: Single instance.” Amazon’s default Rails template manages Rails processes on each server box with a passenger + nginx proxy.

When we first decided to go to a grunt-based asset pipeline, we worried this might impact the way we deployed our servers. In fact, it does not. Our git code bundle containing our Rails app, Angular front-end, and shared assets is deployed to Elastic Beanstalk via git aws.push, exactly as it was prior to our grunt-based asset pipeline switch.

We then do smoke testing on our staging environment.

Elastic Beanstalk production environment

After we have determined the staging release is ready to go to production, we promote the current code bundle from staging to production simply by loading up the EB console for the production stage of our project, clicking “Upload and Deploy” from the Dashboard, clicking “All Versions” in the popup, then selecting the git version currently deployed to staging.

For production, we use Amazon’s default Rails template, and “Environment type: Load balanced, auto scaling.” Elastic Beanstalk takes care of rolling updates with configured delays, aka no-downtime deployments.

Wrap up

The above system, combined with the grunt-based asset pipeline described in our previous post, allows us to iterate and deploy with confidence. Future work will focus on improving deploy times, perhaps by baking AMIs or exploring splitting our monolithic deployment artifact into multiple pieces, e.g., code and assets, npm packages, etc.


Curious about Pedago? Enter your email address to be added to our beta list.

Questions, comments? You should follow us on Twitter here.

Engineering

Goodbye, Sprockets! A Grunt-based Rails Asset Pipeline

This is the first in a two-part series about our build and deploy process. See Part 2 here.

Like any good startup, we try to leverage off-the-shelf tools to save time in our development process. Sounds simple enough, but the devil is in the details, and sometimes a custom solution is worth the effort. In this post, I’ll describe how and why we replaced the Rails asset pipeline with a Grunt-based system.

In the Beginning…

Early on, we embraced AngularJS as the foundation of our core application. We started prototyping using the Yeoman project and never looked back. If you’ve never used this project before, I highly recommend checking it out. It will save you time and tedium in setting up a development ecosystem. We fell in love with the Bower and Grunt utilities as a way to manage project dependencies and build pipelines, and we found the array of active development on the various supporting toolsets impressive. We were knee deep in NodeJS land at this point.

After we had stubbed out a good portion of the UI on mock data, we needed to start building out an API that could take us into further iteration. Ruby on Rails was proven and familiar, and we knew how to carve out a reliable backend in no time flat. Additionally, we wanted to take advantage of some proven RubyGems to handle common tasks for which the NodeJS web ecosystem hadn’t yet fully established itself. Some of these tasks, such as rendering views, relied on Sprockets for asset compilation.

At this point, we had an AngularJS project, built and managed with Grunt, contained within a Rails project, built and managed with Rake and Sprockets.

Trouble in Paradise

We quickly found ourselves hitting a wall trying to manage these two paradigms, as have several others.

Our hybrid Grunt + Sprockets asset pipeline included multiple build processes and methods of shuffling assets. The more we tried to get these two jealous lovers to play nice, the more they fought. The final straw came down to minification-induced runtime errors and the lack of sourcemap compilation support in Sprockets (while somewhat supported in an ongoing feature branch, sourcemaps hadn’t made it into master and required dependency changes we weren’t ready to make quite yet).

At this point it became apparent that we were wasting precious cycles dealing with things outside our core competency, and that we needed to unify these pipelines once and for all.

Unification

Our solution: say goodbye to Sprockets! We have completely disabled the traditional Rails asset pipeline, and now rely on GruntJS for all things assets-related. The deciding factors for us were the community activity and the flexibility the project provided. Here’s a Gist of our (slightly sanitized) Gruntfile.js powering the whole pipeline.

How we currently work:

  • We don’t use the Rails asset helpers…at all. We use vanilla HTML for our views as much as possible. Attempts to use the Rails asset helpers ended up being overly complex and ultimately felt like trying to fit a square peg into a round hole.
  • We reference the compiled scripts and styles (common.js, app.js, main.css, etc) directly in our Rails layouts.
  • Grunt build and watch tasks handle the pipeline actively and passively (see the watch sketch after this list). In development, we use the wrapper task grunt server to launch Rails along with our watches. Source and styles are compiled and published directly to Rails as they are saved. Likewise, unit tests are run continually, with output to the console and OS X reporters.
  • LiveReload refreshes the browser or injects CSS whenever published assets are updated or otherwise modified.
  • We no longer require our Rails servers to perform any sort of asset compilation at launch, as the assets are now built by CI with the command grunt build prior to deployment. Nothing structural in our build and deploy process has changed (in our case, using Bamboo to deploy to Elastic Beanstalk).
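
As a rough illustration of the watch/LiveReload wiring, here’s a sketch of what such a configuration can look like. The file globs, task names, and targets are assumptions for illustration, not our actual Gruntfile.

```javascript
// Gruntfile.js excerpt (sketch) -- watch sources, recompile, and push changes
// to the browser over LiveReload. Globs and task names are hypothetical.
module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      options: { livereload: true },   // grunt-contrib-watch's built-in LiveReload server
      scripts: {
        files: ['app/scripts/**/*.js'],
        tasks: ['jshint', 'copy:publish']   // lint, then publish to Rails' public dir
      },
      styles: {
        files: ['app/styles/**/*.scss'],
        tasks: ['compass', 'autoprefixer', 'copy:publish']
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
};
```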

With the above, we are now constantly testing using the assets that actually make it into a production environment, with sourcemap support to handle browser debugging sessions. Upon deployment, Rails instances do not need to pre-process static assets, reducing warm-up time.

Ultimately, the modular nature of the Grunt task system ensures we have a huge array of tools to work with, and as such, we’ve been able to incorporate all the nice little things that Sprockets did for us (including cache-busting and gzip compression) and the things it didn’t (sourcemaps).

DIY

Feel free to steal our Gruntfile.js if you’re looking to adopt this system. We’ve also cobbled together a list of Grunt tasks that we’ve found helpful:

  • grunt-contrib-watch – the glue that binds automated asset compilation together.
  • grunt-angular-templates – allows us to embed our AngularJS directive templates into our JavaScript amalgamation. Also useful for testing.
  • grunt-contrib-uglify – handles all JS concatenation, minification, and obfuscation. Despite adhering to AngularJS minification rules, we’ve found issues with the mangle parameter and must disable that flag when handling Angular code (see the config sketch after this list). UglifyJS2 also provides our sourcemaps.
  • grunt-contrib-compass – we only author SCSS and rely on Compass to handle everything concerning our styles, including compilation and minification as well as spritesheet and sourcemap generation.
  • grunt-autoprefixer – …except we don’t bother writing browser-specific prefixes. Instead we use autoprefixer to automatically insert them. The recent version supports sourcemap rewrites.
  • grunt-cache-bust – renames assets to CDN-friendly, cache-busted filenames during distribution.
  • grunt-contrib-jshint + grunt-jsbeautifier – keeps our code clean and pretty.
  • grunt-karma – is constantly making sure we write code that works as intended.
  • grunt-todos – reminds us not to litter.  =]
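
For example, here’s a minimal sketch of an uglify target with mangling disabled and a sourcemap emitted. The paths are placeholders, not our actual configuration.

```javascript
// Gruntfile.js excerpt (sketch) -- grunt-contrib-uglify target for the Angular
// bundle, with name mangling disabled. Paths are hypothetical.
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      app: {
        options: {
          mangle: false,     // Angular's string-based DI breaks when names are mangled
          sourceMap: true    // emit a .map file for browser debugging
        },
        files: {
          'dist/scripts/app.js': ['.tmp/scripts/**/*.js']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
};
```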

Learn more about our build and deploy process in Part 2 of this series.

We hope this guide helps others trying to marry these two technologies. Please feel free to contribute suggestions for future improvements via GitHub or Twitter!


Curious about Pedago? Enter your email address to be added to our beta list.

Questions, comments? You should follow us on Twitter here or on Facebook here.
