{"API Management"}

Blog Posts on API Management

These are posts from the API Evangelist blog that are focused on API management, allowing for a filtered look at my analysis on the topic. I rely on these posts, along with the organizations, APIs, and tools I curate, to help paint a picture of what is going on.

Your Wholesale API For Sale In The Major API Marketplaces

I have been talking about selling wholesale APIs for some time now, allowing your potential customers to pick and choose exactly the API infrastructure they need, and develop their own virtualized API stacks. I'm not talking about publishing your retail API into marketplaces like Mashape, I'm talking about making your API deployable and manageable on all the major cloud providers.

You can see this shift in business with a recent AWS email I got telling me about multi-year contracts for SaaS and APIs. Right now there are 70 SaaS products on AWS Marketplace, but from the email I can tell that Amazon is really trying to expand its API offerings as well. When you deploy an API solution using the AWS Marketplace, and a customer signs up for a one, two, or three year contract, they don't pay for the underlying AWS infrastructure, just for the SaaS or API solution. I will have to explore more to see if this is just absorbed by the API operator, or if AWS is working to incentivize this type of wholesale API deployment in their marketplace, locking in both providers and consumers.

I'm still learning about how Amazon is shifting the landscape for deploying and managing APIs in this wholesale, almost API broker type of way. I recently came across the AWS Serverless API Portal, which is meant to augment the delivery of SaaS or API solutions in this way. With this model you could be in the business of deploying API developer portals for companies, and filling the catalog with a variety of wholesale API resources, from a variety of providers--opening up a pretty interesting opportunity for white label APIs, and API brokers.

As I'm studying this new approach to deploying and managing APIs using marketplaces like this, I'm also noticing a shift towards delivering more algorithmic APIs, with machine learning, artificial intelligence, and other voodoo as the engine--resulting in a shift towards machine learning API marketplaces. I really need to carve off time to think about API deployment and management in this way. I've already begun looking at what it takes to deploy bare bones, wholesale APIs using the AWS, Google, Heroku, or Azure clouds, but I really haven't invested much in the business side of all of this, somewhere Amazon seems to be slightly ahead of the curve.


API Rate Limiting At The DNS Layer

I just got an email from my DNS provider CloudFlare about rate limiting and protecting my APIs. I am a big fan of CloudFlare, partly because I am a customer, and I use them to manage my own infrastructure, but also partly due to the way they understand APIs, and actively use them as part of their business, products, and services.

Their email spans a couple areas of my research that I find interesting, and extremely relevant: 1) DNS, 2) Security, 3) Management. They are offering me something that is traditionally done at the API management layer (rate limiting), but now offering to do it for me at the DNS layer, expanding the value of API rate limiting into the realm of security, and specifically in defense against DDoS attacks--a serious concern.

Talk about an easy way to add value to my world as an API provider. One that is frictionless, because I'm already depending on them for the DNS layer of my web and API operations. All I have to do is sign up for the new service, and begin dialing it in for all of my APIs, which span multiple domains--all conveniently managed using CloudFlare.

Another valuable thing CloudFlare's approach does, in my opinion, is to reintroduce the concept of rate limiting to the world of websites. This helps me in my argument that companies, organizations, institutions and government agencies should be considering having APIs to alleviate website scraping. Using CloudFlare they can now rate limit the website while pointing legitimate use cases to the API where their access can be measured, metered, and even monetized when it makes sense.

I'm hoping that CloudFlare will be exposing all of these services via their API, so that I can automate the configuration of rate limiting for my APIs at the DNS level using APIs. As I design and deploy new API endpoints I want them automatically protected at the DNS layer using CloudFlare. I don't want to have to do extra work when it comes to securing and managing web or API access. I just want a baseline for all of my operations, and when I need I can customize per specific domains, or down to the individual API path level--the rest is automated as part of my continuous integration workflows.
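
To make this a bit more concrete, here is a rough sketch of what that automation might look like, assuming CloudFlare exposes a rate limiting resource under each zone in their v4 API--the exact endpoint and payload fields are assumptions on my part, so check their documentation before wiring this into a CI workflow:

```python
import os
import requests

# Assumptions: CloudFlare's v4 API, a token with zone-level permissions, and a
# rate limiting resource under each zone -- verify the endpoint and payload
# shape against CloudFlare's current documentation before using.
CF_API = "https://api.cloudflare.com/client/v4"
HEADERS = {
    "Authorization": f"Bearer {os.environ['CF_API_TOKEN']}",
    "Content-Type": "application/json",
}

def protect_api_path(zone_id, url_pattern, threshold=300, period=60):
    """Create a baseline rate limit rule for an API path within a zone."""
    rule = {
        "match": {"request": {"url": url_pattern}},
        "threshold": threshold,  # requests allowed...
        "period": period,        # ...per this many seconds
        "action": {"mode": "ban", "timeout": 600},
        "description": "Baseline API protection applied by CI workflow",
    }
    response = requests.post(f"{CF_API}/zones/{zone_id}/rate_limits",
                             headers=HEADERS, json=rule)
    response.raise_for_status()
    return response.json()

# Example: apply the same baseline to each new endpoint as it is deployed.
# protect_api_path("example-zone-id", "*api.example.com/v1/*")
```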


How I Can Help Make Sure Your API Is Ready For Use

As one of my clients is preparing to move their API from deployment to management, I'm helping them think through what is necessary to make sure their API is ready for use by a wider, more public group of developers. Ideally, I am brought into the discussion earlier on in the lifecycle, to influence design and deployment decisions, but I'm happy to be included at any time during the process. This is a generalized, and anonymized version of what I'm proposing to my client, to help make sure their API is ready for prime time--I just wanted to share with you a little of what goes on behind the scenes at API Evangelist, even when my clients aren't quite ready to talk publicly.

External Developer Number One
The first place I can help with the release of your API is when it comes to being the first external developer and fresh pair of eyes on your API. I can help with signing up, and actually making calls against every API, to make sure things are coherent and stable before putting it in the hands of 3rd party developers at a hackathon, amongst partners, or the general public. This is a pre-requisite for me when it comes to writing a story on any API or doing additional consulting work, as it puts me in tune with what an API actually does, or doesn't do. The process will help provide you with a new perspective on the project after you have put so much work into the platform--in my case, it is a fresh pair of eyes that have on-boarded with thousands of APIs.

Crafting Your API Developer Portal
Your operations will need a single place to go to engage with everything API. I always recommend deploying API portals using Github Pages, because it becomes the first area to engage with developers on Github, as part of your overall API efforts. Github is the easiest way to design, develop, deploy, and manage your API portal. I suggest focusing on all of the essential building blocks that any API operations should possess:

  • Landing Page
    • Tag Line - A short tagline describing what is possible using your API.
    • Description - A quick description (single paragraph) about what is available.
  • On-boarding
    • Signup Process - A link to the sign-up process for getting involved (OpenID).
    • Getting Started - A simple description, and numbered list of what it takes to get started.
    • Authentication Overview - A page dedicated to how authentication works.
    • FAQ - A listing of frequently asked questions broken down into categories, for easy browsing.
  • Integration
    • Documentation - Interactive documentation generated by the swagger / OpenAPI definition.
    • Code - Code samples, or software development kits for developers to put to work.
    • Postman Collection - A Postman Collection + Button for loading up APIs into Postman client.
  • Support
    • Github - Set up Github account, establish profile, and setup portal as the first point of support.
    • Twitter - Set up a Twitter account, establish a presence, and make it known you are open for business.
    • Email - Establish a single, shared email account that can provide support for all developers.
  • Communications
    • Blog - Establish a blog using Jekyll (easy with Github Pages), and begin telling stories of the platform.
    • Twitter - Get the Twitter account in sync with communication, as well as support efforts.
  • Updates
    • Roadmap - Using Github issues, establish a label, and rhythm for sharing out the platform roadmap.
    • Issues - Using Github issues, establish a label, and rhythm for sharing out current issues with the platform.
    • Change Log - Using Github issues, establish a label and rhythm for sharing out changes made to the platform.
    • Status - Publish a monitoring and status page keeping users in tune with the platform stability and availability.
  • Legal
    • Terms of Service - Establish the terms of service for your platform.
    • Privacy Policy - Establish the privacy policy for your platform.

All of these building blocks have been aggregated from across thousands of APIs, and are something ALL successful API providers possess. I recommend starting here. You will need this as a baseline to get up and running with developers, whether generally on the web or through specific hackathons and your private engagements. Being developer number one, and helping craft, deploy, and polish the resources available via a coherent developer portal, are what I bring to the table, and I am willing to set aside time to help you make this happen.

Additionally, I'm happy to set into motion some other discussions regarding pushing forward your API operations:

  • Discovery - Establish a base discovery plan for the portal, including SEO, and APIs.json development (see the sketch after this list).
  • Validation - Validate each API endpoint, and establish JSON assertions as part of the OpenAPI & testing.
  • Testing - Establish a testing strategy for not just monitoring all API endpoints, but make sure they return valid data.
  • Security - Think about security beyond just identity and API keys, and begin scanning API endpoints, and looking for vulnerabilities.
  • Embeddable - Pull together a strategy for embeddable tooling including buttons, badges, and widgets.
  • Webhooks - Consider how to develop a webhook strategy allowing information to be pushed out to developers, reducing calls to APIs.
  • iPaaS - Think through how to develop an iPaaS strategy to allow for integration with platforms like Zapier, empowering developers and non-developers alike.
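
To illustrate the discovery item above, here is a minimal sketch of the kind of APIs.json index that could live at the root of the portal--the API names and URLs are placeholders for whatever the portal actually exposes:

```python
import json

# A minimal APIs.json discovery file for the portal root -- the property names
# follow the APIs.json format, but the names and URLs here are placeholders.
apis_json = {
    "name": "Example API Portal",
    "description": "Developer portal for the Example platform APIs.",
    "url": "https://developer.example.com/apis.json",
    "apis": [
        {
            "name": "Example API",
            "description": "Core resources for the Example platform.",
            "baseURL": "https://api.example.com/v1",
            "properties": [
                {"type": "Swagger", "url": "https://developer.example.com/openapi.json"},
                {"type": "TermsOfService", "url": "https://developer.example.com/terms/"},
            ],
        }
    ],
}

with open("apis.json", "w") as handle:
    json.dump(apis_json, handle, indent=2)
```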

This is how I am helping companies make sure their APIs are ready for prime time. I regularly encounter many teams who have great APIs but have been too close to the ovens baking the bread, and find it difficult to go from development to management in a coherent way. I have on-boarded and hacked on thousands of APIs. I have been doing this for over a decade, and exclusively as a full-time job for the last seven years. I am your ideal first developer and can save you significant amounts of time when it comes to crafting and deploying your API portal.

As a one person operation, I don't have the time to do this for every company that approaches me, but I'm happy to engage with almost everyone who reaches out, to understand how I can best help. Who knows, I might help prevent you from making some pretty common mistakes, and I am happy to be a safer, early beta user of your APIs--one that will give you the feedback you are looking for.


Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and think the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle, like definition, design, DNS, deployment, management, monitoring, and others. Each of the Amazon services is available with a listing of each element available for the service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy, and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates we are crafting for each individual API.
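
As a rough illustration of what a templatized API stack looks like in practice, here is a minimal CloudFormation template that stands up a single API Gateway REST API, launched with boto3--the stack and API names are placeholders, and a real template would also declare resources, methods, stages, and integrations:

```python
import json
import boto3

# A minimal CloudFormation template declaring a single API Gateway REST API --
# a real template would also declare resources, methods, stages, and the
# backing integrations that make the API do something useful.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleRestApi": {
            "Type": "AWS::ApiGateway::RestApi",
            "Properties": {"Name": "example-api"},
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-api-stack",
    TemplateBody=json.dumps(template),
)
```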

The AWS Cloud Formation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft, or any other stack you define--something that could easily be done using APIs.json, developing another layer of templating for which resource types are available in the designer, as well as the formation templates generated by the design tool itself. There should be an open source "API formation designer" available, that could span cloud providers, allowing architects to define which resources are available in their toolbox--that anyone could fork and run in their environment.

I like where AWS is headed with their Cloud Formation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and someone steps up with an even more attractive drag and drop version, that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*


There Is More To This Than Just Having An API

There is a reason why I encourage API providers to look at not just the technology of APIs but also invest heavily into the business and politics of API operations. There is a reason I evangelize a more open, web-based approach to doing APIs, even if you are peddling hardware and device APIs. It is because there are a number of human-centered elements present when doing APIs, that will define your services, and ultimately contribute to whether or not they are a success or a failure.

One example of this from my API news curation archives is from the Sonos API ecosystem, and a pretty big blunder in communication the audio device platform made late last year, that is significantly impacting their partnerships in 2017.  Directly from the CEPro article:

A collective cheer roared from home-technology installers at CEDIA Expo 2016, when Sonos announced an API for home-automation integration starting with Control4 (Nasdaq: CTRL), Crestron, iPort, Lutron and Savant.

These partners – and most other respectable smart-home systems providers – have integrated with Sonos for many years, albeit with unsanctioned drivers created through reverse-engineering of a fairly straightforward UPnP-based protocol.

But the new API kind of snuck up on dealers and vendors alike, with their customers waking up to a brand new Sonos experience in late December, courtesy of an auto-update by Sonos.

The new experience was inferior to the original, with users unable to access Spotify or Amazon Music from the home automation system, except to select favorites created through Sonos’s own app.

When you are operating an API that many different businesses depend on, communication is essential. This is why I advocate that API providers always have a clear communication and support strategy, as well as road map, issue management, and change log processes. Every single change has to be considered for its impact on the community, and you have to have a plan for how you will be communicating with and supporting your API consumers' needs around a change.

This is also why API providers should be understanding the benefits of hypermedia when it comes to change management. Hypermedia design patterns provide you with a more honest approach to dealing with change, one that helps make your partners' clients more fault tolerant. It is well worth the time learning about the handful of leading hypermedia media types. Any one of them would have helped Sonos manage change.
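
As a simple illustration of what hypermedia buys you, here is a sketch of a HAL-style response where clients follow link relations instead of hard-coding URLs, letting a provider move or version resources without breaking every integration--the resource and relation names are made up for the example:

```python
# A HAL-style resource -- clients follow the "_links" relations instead of
# hard-coding URL structures, so the provider can change paths, add
# capabilities, or retire features without breaking every client.
playlist = {
    "id": "42",
    "name": "Morning Mix",
    "_links": {
        "self":   {"href": "/playlists/42"},
        "tracks": {"href": "/playlists/42/tracks"},
        "play":   {"href": "/players/livingroom/queue"},
    },
}

def follow(resource, rel):
    """Resolve a link relation rather than assuming a URL pattern."""
    return resource["_links"][rel]["href"]

print(follow(playlist, "tracks"))  # -> /playlists/42/tracks
```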

There are multiple tools in the API toolbox to help you manage change. In the end, the most effective tools involve human to human interaction, and actually talking to your partners early on about change, and making sure you have a robust communication strategy throughout your API lifecycle. We engineers like to think it is the API technology making the magic happen, but in the end, there is more to this than just having application programming interfaces, it is about also having the right human interfaces.


The AWS Serverless API Portal

I was looking through the Github accounts for Amazon Web Services and came across their Serverless API Portal--a pretty functional example of a forkable developer portal for your API, running on a variety of AWS services. It's a pretty interesting implementation because in addition to the tech of your API management it also helps you with the business side of things. 

The AWS Serverless Developer Portal "is a reference implementation for a developer portal application that allows users to register, discover, and subscribe to your API Products (API Gateway Usage Plans), manage their API Keys, and view their usage metrics for your APIs [...] it also supports subscription/unsubscription through a SaaS product offering through the AWS Marketplace"--providing a pretty compelling API portal solution running on AWS.

There are a couple things I think are pretty noteworthy:

  • Application Backend (/lambdas/backend) - The application backend is a Lambda function built on the aws-serverless-express library. The backend is responsible for login/registration, API subscription/unsubscription, usage metrics, and handling product subscription redirects from AWS Marketplace.
  • Marketplace SaaS Setup Instructions - You can sell your SaaS product through AWS Marketplace and have the developer portal manage the subscription/unsubscription workflows. API Gateway will automatically provide authorization and metering for your product and subscribers will be automatically billed through AWS Marketplace
  • AWS Marketplace SNS Listener Function (Optional) (/listener) - The listener Lambda function will be triggered when customers subscribe or unsubscribe to your product through the AWS Marketplace console. AWS Marketplace will generate a unique SNS Topic where events will be published for your product (a rough sketch of such a listener follows below).
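
Here is a rough sketch of what that optional listener might look like as a Python Lambda function--the SNS envelope is standard, but the message field names are assumptions about the Marketplace payload, so compare them against the portal's own listener code:

```python
import json

def handler(event, context):
    """Process AWS Marketplace subscribe/unsubscribe notifications from SNS.

    The SNS envelope structure is standard; the message fields ("action",
    "customer-identifier") are assumptions about the Marketplace payload and
    should be checked against the serverless portal's own listener code.
    """
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        action = message.get("action")
        customer = message.get("customer-identifier")

        if action == "subscribe-success":
            # e.g. enable the customer's API key / usage plan here
            print(f"activate access for {customer}")
        elif action == "unsubscribe-pending":
            # e.g. schedule the customer's access to be revoked
            print(f"revoke access for {customer}")
    return {"status": "ok"}
```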

This is the required infrastructure we'll need to get to what I've been talking about for some time with my wholesale API and virtual API stack stories. Amazon is providing you with the infrastructure you need to set up the storefront for your APIs, providing the management layer you will need, including monetization via their marketplace. This is a retail layer, but because your infrastructure is set up in this way, there is no reason you can't sell all or part of your setup to other wholesale customers, using the same AWS Marketplace.

I had AWS Marketplace on my list of solutions to better understand for some time now, but the AWS Serverless Developer Portal really begins to connect the dots for me. If you can sell access to your API infrastructure using this model, you can also sell your API infrastructure to others using this model. I will have to set up some infrastructure using this approach to better flesh out how AWS infrastructure and open templates like this serverless developer portal can help facilitate a more versatile, virtualized, and wholesale API lifecycle.

There is a more detailed walkthrough of how to get going with the AWS Serverless Developer Portal, helping you think through the details. I am a big fan of these types of templates--forkable Github repositories, with a blueprint you can follow to achieve a specific API deployment, management, or any other lifecycle objective.


I've been thinking about the concept of a wholesale API for some time. Going beyond how we technically deploy our APIs, and focusing more on how we can provide a wholesale version of the same API resources, with accompanying terms of service that go beyond just a retail level of API access in the cloud. Not all APIs fit into this category, but with the containerization of everything, and the evolving world of Internet of Things (IoT), there are many new ways in which API resources are being deployed.

You can see this evolution in how we are deploying APIs in one of the latest API deployment platforms I added to my API deployment research, Nanoscale.io. This image is just a portion of their platform, but the separation of deployment concerns articulates the technical side of what I'm talking about; we just need to add in considerations for the business and political side of how this works.

We've seen API deployment move from on-premise and back again, and now we are seeing it move onto everyday objects like cameras, printers, and routers. I'm watching service providers like Nanoscale.io emerge to help us deploy our APIs exactly where we need them. I'm guessing that the companies who have their business models in similar order, allowing for API service composition from the management layer to further slide down the stack to the deployment layer, will come out ahead.


An API Discovery API For Your API With Tyk

If you are selling services to the API space you should have an API--it is just how this game works (if you are savvy). I was going through Tyk's API for their open source API management solution and came across their API definitions API, which gives you a list of APIs for each Tyk deployment--baking API discovery into the open source API management solution by default.

The API API (I still enjoy saying that) gives you the authentication, paths, versioning, and other details about each API being managed. I'm writing about this because I think that an API API should be the default for all API service providers. If you are selling me API services you should have an API for all your services, especially one that allows me to discover and manage all the APIs I'm applying your service to. 
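
For example, pulling the list of API definitions from a Tyk gateway is a single call against its REST API--this sketch assumes the open source gateway's admin endpoint and secret header, so confirm both against the Tyk documentation for your deployment:

```python
import os
import requests

# Assumes the Tyk open source gateway's REST API, secured with the gateway
# secret in the x-tyk-authorization header -- confirm the endpoint and header
# against the Tyk documentation for your deployment.
TYK_GATEWAY = "https://tyk.example.com"
HEADERS = {"x-tyk-authorization": os.environ["TYK_SECRET"]}

response = requests.get(f"{TYK_GATEWAY}/tyk/apis", headers=HEADERS)
response.raise_for_status()

for api in response.json():
    print(api.get("name"), api.get("api_id"), api.get("proxy", {}).get("listen_path"))
```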

I am expanding my definition of a minimum viable blueprint for API service providers, and adding an API API as one of the default APIs. I'm going to be adding the account, billing, and a handful of other essential APIs to my default definition. If I'm using your service to manage any part of my API operations, I need to be automating discovery, management, and billing in our relationship.

It seems obvious to me, but I'm looking to provide a simple checklist that other API service providers can consider as they craft their strategy. My goal is to help make sure each stop along the lifecycle can be orchestrated in a programmatic way, like Tyk does.

Disclosure: Tyk is an API Evangelist partner.


The Open Guide to Amazon Web Services

I keep an eye on things that are trending daily and weekly on Github because it is a great way to discover new companies and individuals doing interesting things with APIs. While looking at this earlier this week I came across the open guide to Amazon Web Services, a pretty robust, and well organized getting started guide to everything AWS.

Here is their description of this resource covering the leading cloud computing platform:

A lot of information on AWS is already written. Most people learn AWS by reading a blog or a “getting started guide” and referring to the standard AWS references. Nonetheless, trustworthy and practical information and recommendations aren’t easy to come by. AWS’s own documentation is a great but sprawling resource few have time to read fully, and it doesn’t include anything but official facts, so omits experiences of engineers. The information in blogs or Stack Overflow is also not consistently up to date.

This guide is by and for engineers who use AWS. It aims to be a useful, living reference that consolidates links, tips, gotchas, and best practices. It arose from discussion and editing over beers by several engineers who have used AWS extensively.

I find it interesting when API providers invest in authoritative solutions like this, acknowledging the scattered and often out of date nature of blogs, QA sites, forums, and the wealth of other self-service resources available for APIs. Amazon is getting seriously organized with their approach to providing resources for developers--they have been doing this a while, and know where the pain points are.

Amazon's organized approach, the breaking down by service, and the usage of Github are all interesting things I think are worth noting as part of my research. AWS is a tough API pioneer to showcase because they have way more resources than the average API provider, but as one of the early leaders in the API space they possess some serious wisdom and practices that are worth emulating. I'll keep going through their open guide, and see what other patterns I can extract and showcase for my readers to consider.


An Integrations Page For Your API Solution

One new way that I am discovering the tech services that the cool kids are using is the dedicated integrations pages of the API service providers I track on. Showcasing the services your platform integrates with is a great way of educating consumers about what the possibilities are when it comes to your tools and services. It is also a great way for analysts like me to connect the dots around which services are most important to the average user.

API service providers like DataDog, OpsClarity, and Pingometer are providing dedicated integration pages showcasing the other 3rd party platforms they integrate with. Alpha API dogs like Runscope also have integration APIs, allowing you to get a list of the integrations your team depends on (perfect for another story). I'm just getting going tracking the existence of these integration pages, but each time I have come across one lately I find myself stopping and looking through each of the services included.

Directly, API integrations provide a great way to inform customers about which of the other services they use can be integrated with this platform, potentially adding to the number of reasons why they might choose to go with a service. Indirectly, API integration pages provide a great way to inform the sector about which API driven platforms are important to service providers, and their customers. After I get a number of these integration pages bookmarked as part of my research, I will work on other stories showcasing the various approaches I find.


A Service Level Agreement API For API Service Providers

I am spending some time profiling the companies who are part of my API monitoring research, specifically learning about the APIs they offer as part of their solutions. I do this work so that I can better understand what API monitoring service providers are offering, but also for the discoveries I make along the way--this is how I keep API Evangelist populated with stories. 

An interesting API I came across during this work was from the Site24x7 monitoring service, specifically their service level agreement (SLA) API--an API for adding, managing, and reporting against SLAs that you establish as part of the monitoring of your APIs. It provides a pretty interesting API pattern that seems like it should be part of the default API management stack for all API providers.

This would allow API providers to manage SLAs for their operations, but also potentially expose this layer for each consumer of the API, letting them understand the SLAs that are in place, and whether or not they have been met--in a way that could be seamlessly integrated with existing systems. An API for SLA management seems like it could also be a standalone operation, helping broker this layer of the API economy, and provide a rating system for how well API providers are holding up their end of the API contract.
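
To give a sense of the pattern, here is a hypothetical sketch of what a default SLA resource in an API management stack could look like--none of these endpoints or field names come from Site24x7, they are just an illustration of the shape such an API might take:

```python
import requests

# Hypothetical SLA management resource -- the endpoint, fields, and reporting
# call below are illustrative only, not Site24x7's actual API.
BASE = "https://api.example.com/management/v1"
HEADERS = {"Authorization": "Bearer EXAMPLE_TOKEN"}

sla = {
    "name": "Gold tier",
    "uptime_target": 99.9,           # percent per calendar month
    "response_time_target_ms": 500,  # 95th percentile
    "support_response_hours": 4,
}

created = requests.post(f"{BASE}/slas", headers=HEADERS, json=sla).json()

# A consumer-facing report of how the provider is tracking against the SLA.
report = requests.get(f"{BASE}/slas/{created['id']}/report",
                      headers=HEADERS, params={"period": "2017-03"}).json()
print(report)
```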


API Access To Your Account By Default But Requires Permission To See Others

I wrote about SoundCloud beginning to require approval before developers get access to any API resources yesterday, a concept that I want to keep exploring. I'm going to be going through the APIs I track on, looking for different variations of this, but before I did this I wanted to explore a couple of approaches I already had rattling around in my head.

What if, when you first sign up for API access, you only get access to your own data and content? You couldn't get access to any other users' data until you were approved. It seems like something that would incentivize developers to publish data and content, and build their profiles out, which is good for the platform, right? It would also protect other end-users from malicious activity by random developers who are just looking to wreak havoc in support of their own objectives and do not care about the platform--like we saw with SoundCloud.

A good example of how this could be applied is evident in the post yesterday by Kris Shaffer on Medium, who was looking to get his content out of the platform. I use the Medium API to syndicate blog posts to Medium (POSSE), but there is no read API allowing me to pull my content out--I agree with Kris, this is a problem. What if Medium opened up API access, allowing us platform users to get at our own content, but then required approval of any app before there is ever access to other users' content?

Some food for thought. I hear a lot of platforms say they don't do APIs because they don't want to end up with the same problems as Twitter. I think this is the result of some legacy views about public APIs that should just go away. Not all APIs are created equal, and I feel that APIs shouldn't always be just about applications; often they are just a lifeline for platform users, helping end-users better manage their data and content. If my internal systems and other 3rd party systems are integrated together with APIs, the likelihood I will grow dependent on the integration only increases.
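
A minimal sketch of that default, written as a Flask-style endpoint, might look like the following, where every application can read the authenticated user's own content, but has to be approved before reading anyone else's--the authentication and approval checks here are placeholders for whatever the platform actually uses:

```python
from flask import Flask, abort, g, request

app = Flask(__name__)

# Placeholder stores standing in for real authentication and app review.
APPROVED_APPS = {"approved-client-123"}
POSTS = {"alice": ["first post"], "bob": ["hello"]}

@app.before_request
def identify_caller():
    # In a real platform these would come from the OAuth token, not headers.
    g.current_user_id = request.headers.get("X-User", "")
    g.client_id = request.headers.get("X-Client", "")

@app.route("/users/<user_id>/posts")
def list_posts(user_id):
    # Every application can read the authenticated user's own content...
    if user_id == g.current_user_id:
        return {"posts": POSTS.get(user_id, [])}
    # ...but access to other users' content requires an approved application.
    if g.client_id not in APPROVED_APPS:
        abort(403)
    return {"posts": POSTS.get(user_id, [])}
```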


Working Tips and Tricks Into Your Regular API Evangelism Efforts

I usually don't have to look very far to find good examples of API evangelism in the field, because the best technology providers are usually pretty consistent and vocal about their practices--allowing me to just pluck from my feeds, and rework as a story for my API provider readership. One of the consistent sources for me out there is from the Docker community, and from what they like to call Docker Captains.

One of the things I see regularly from this Docker community leadership is the sharing of their platform tips and tricks at in-person and online meetups. This is definitely something I recommend other API providers do when possible, but I would also recommend working to integrate the concept into your regular evangelism activities like blogging, weekly newsletters, and Tweeting.

Maybe it is something you could also open up to the rest of the community, allowing your trusted partners and your favorite developers to share their tips & tricks around API integration and usage. The idea of tips and tricks is a pretty basic thing, but if you are working to stay creative in producing content, while also keeping things in the realm of actually helping your API developers be successful, it is one that can go a long way each week at meetups, on your blog, in your newsletter, on Twitter, and across all the other channels you are already using to reach developers.


Be Straight Up About Internal Challenges When Hiring Your API Talent

I see an increasing number of job postings on LinkedIn and other job websites from companies who are actively seeking an API rockstar, ninja, lead, owner, or product manager, and because of my connections in the space I know that some of the intent behind them is less than sincere. Don't get me wrong, I think ALL companies should be embarking on their API journey, and if that means bringing in outside talent--get it done!

My motivation in writing this post is to help companies be more realistic during their talent search, and hiring process, as well as internally with their teams. As an IT and lead developer veteran, I have been brought in to take the reins on a number of teams and I have seen a wide range of toxic situations. I understand the internal struggles exist in all companies, but the companies that were the worst for me, were the ones where I was blindsided by the depth of the entrenchment and struggle with leadership and internal teams, either because they were in denial or were straight up bullshitting me--in hopes I might be able to wave my magic wand and just fix everything.

I've confidentially heard many stories from API product leads, and evangelists after exiting a company, or sometimes while they are still in their positions, about how entrenched internal leadership is when it comes to "innovation" and "change"--all while putting on a good show that APIs are truly the priority. I understand that companies want to look innovative, hip, agile, flexible, and all the things often associated with APIs, but bringing in API talent, only to let them hit a brick wall because they were unprepared just isn't good business.

If you are going to say that you are doing APIs, and issuing press releases, and promising your customers, partners, and internal stakeholders that you are going to do APIs, make sure you properly prepare any talent you are looking to lead the charge. I'm not saying the API journey will be easy, or that you shouldn't be embarking on this journey. I am just recommending that you do not go around hiring API talent, only to blindside them upon entry with entrenched, unwilling-to-evolve internal actors...or if this is the case, just make sure you set the stage properly during the hiring process.


The API Evangelist API Management Guide

API management is the oldest area of my research. The area has been being defined since Mashery first opened its doors in 2006 and continues to be defined with the recent IPO by Apigee, and the entry of AWS into the sector with the AWS API Gateway. API management is often defined by the services offered by these companies, but also by the strategies of leading API providers in the space.

My API management research walks through the common building blocks of the space, the open source tooling, and the API service providers who are serving the space. I first started this guide in 2010 and 2011, and have worked to evolve it with each release, keeping up with the fast-paced change that seems to occur in the space.

This guide touches on, and often overlaps with my other areas of research (as everything was born out of this), but should provide you with what you need as a checklist for evolving your existing API management strategy, or help you craft a brand new one for your new API. This guide has a heavy focus on a service provider led approach to API management, but with the growth in open source solutions lately, I'm working to evolve more towards a DIY approach to making it work.

I fund my research through consulting, selling my guides, and the generous support of my sponsors. I appreciate your support in helping me continue with my work and hope you find it relevant in your API journey.

Purchase My API Management Guide


Automating API Key Management Using API Service Provider APIs, And Other Open Source Solutions

I'm working my way through some of the low hanging fruit in my API notebook when it comes to stories, and found a story thread I was working on regarding automating API key management. I'm personally storing my keys across the private master branch for my API repos, because I don't have any super-sensitive data, and it helps me manage across hundreds of APIs in a central way.

I've talked about the need to automate API key management in the past--with the number of APIs we are using, to reach the level of security we will need, this lower level of keys will need a global refresh and management process. This level of keys most likely won't ever result in large scale security breaches, but will cause plenty of headaches for both API providers and consumers.

If you use one of the following API management solutions, they provide you with an API for managing your API keys:

This will help you manage your keys if you are an API provider, but doesn't do a lot to help you manage your API keys across providers, as an API consumer. Amazon provides a key management solution, but at first glance it appears to be something you can use to manage keys across your AWS infrastructure (am I wrong?)--which makes sense for supporting AWS objectives. ;-)
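
On the provider side of the equation, AWS does expose key management as part of Amazon API Gateway--a quick boto3 sketch of issuing a key and attaching it to a usage plan looks something like this, with the plan and key names being placeholders:

```python
import boto3

# Provider-side key management with Amazon API Gateway via boto3 -- the usage
# plan and key names here are placeholders.
apigateway = boto3.client("apigateway")

plan = apigateway.create_usage_plan(
    name="free-tier",
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 10000, "period": "MONTH"},
)

key = apigateway.create_api_key(name="example-consumer", enabled=True)

apigateway.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)

print(key["value"])  # hand this off to the consumer through your portal
```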

When I wrote my last post on the growing need for API key management solutions, I received a number of emails and DMs, which yielded two pretty interesting open source key management solutions, Vault and Keywhiz. I'm going to evaluate these solutions for their viability as a back-end for an API driven, API key management solution, but I have a lot more work to do.
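
For the consumer side of things, here is a quick sketch of what stashing and retrieving provider keys in Vault could look like using the hvac client and the KV secrets engine--the paths and field names are placeholders, and you should confirm which engine version your Vault server mounts:

```python
import hvac

# Assumes a Vault server with the KV v2 secrets engine mounted at "secret/"
# and a token in hand -- paths and field names are placeholders.
client = hvac.Client(url="http://127.0.0.1:8200", token="EXAMPLE_TOKEN")

# Store the keys for one of the APIs I depend on.
client.secrets.kv.v2.create_or_update_secret(
    path="apis/twitter",
    secret={"consumer_key": "abc123", "consumer_secret": "def456"},
)

# Pull them back out when an application needs to make calls.
secret = client.secrets.kv.v2.read_secret_version(path="apis/twitter")
print(secret["data"]["data"]["consumer_key"])
```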

I'm also working with a partner of mine, SecureDB, to consider the possibility of developing an encrypted API key management solution, which would then be accessible via their secure API. They are looking for interesting solutions like this to be developed on their platform, so if you are a developer looking for a viable micro startup idea--let me know.

As with everything in my world, the concept of automating API keys using APIs, and managing keys across API platforms, is a work in progress--stay tuned!


Adding An OAuth Scope Page As One Of My API Management Building Blocks

I've had a handful of suggested building blocks when it comes to authentication, as part of my API management research, but after taking a look at the OAuth Scopes page for the Slack API, I'm going to add another building block just for listing out OAuth scopes.

For platforms who provide OAuth, scopes are how access to users' content and data is being broken down and negotiated. At the industry level, OAuth scopes are how power and influence are being brokered, so I'm going to start tracking how leading providers are defining their scopes--I am sure there are some healthy patterns that we all can follow here.
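
Since I will need a consistent way to store what I find, here is a rough sketch of the kind of machine-readable structure I have in mind for capturing a provider's OAuth scopes--the provider and scope entries below are illustrative examples, not an authoritative copy of anyone's actual scope listing:

```python
# A simple, machine-readable way to capture a provider's OAuth scopes -- the
# provider and scope entries below are illustrative examples only.
oauth_scopes = {
    "provider": "example-platform",
    "documentation": "https://developer.example.com/docs/oauth-scopes",
    "scopes": [
        {"scope": "channels:read",  "description": "Read public channel metadata."},
        {"scope": "channels:write", "description": "Create and archive channels."},
        {"scope": "users:read",     "description": "Read basic profile information."},
    ],
}

for entry in oauth_scopes["scopes"]:
    print(f'{entry["scope"]:<16} {entry["description"]}')
```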

I have had the pleasure of sitting in on OAuth negotiations between major utility providers, as part of my work with the White House and Department of Energy in the past. This work has given me a glimpse into how access and sharing of data will be negotiated in the future, with OAuth scopes and APIs playing a central role.

It will take me some time to standardize how I gather, store, and publish the OAuth scopes for each API, but I can get started by bookmarking any provider who shares their OAuth scopes, and encourage other API providers to do the same, by suggesting a formal OAuth scopes page as one possible building block you should consider when crafting your API strategy.


The Emerging Need For API Key Management Solutions

An API key management service targeting Drupal developers came across my radar this week. The service is very focused, in that it is a Drupal module helping Drupal developers manage the keys they use across a single app or installation, but I think it represents a potentially larger trend.

I think this particular solution is just a symptom of a growing problem for developers of any type--how do you manage the number of keys you are depending on for your application(s)? API consumers are in need of a plug and play way to store, access, and manage the increasing number of API keys they put to use, otherwise we will be looking at a pretty serious security problem, adding to the existing security issues API providers and consumers face.

If you need evidence of the viability of API key management solutions, AWS has one. Ok, why don't developers just use AWS? Well, they should if it makes sense, but we also need other competing, and complementary, key management solutions to ensure a healthy API space. Not all users are going to need full-blown IAM solutions, they just need a simple, encrypted place to put all their keys, and some utilities to help them manage them.

Personally, I store my keys in a JSON config file stored in the private Github repo for any application I develop, and for each org I have some crude API key management utilities to help me manage turnover. My server apps can cache the config file locally, and my client side apps run on Github Pages, using SSL and Github OAuth to open up the API key access they need.
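
For what it is worth, here is roughly what that crude approach looks like--a JSON config file committed to the private repo, and a small helper for looking keys up and flagging the ones due for rotation (the file layout is just my own convention, not any kind of standard):

```python
import json
from datetime import date, timedelta

# keys.json lives in the private repo for each application, for example:
# {
#   "twitter":    {"api_key": "abc123", "rotated": "2017-01-15"},
#   "cloudflare": {"api_key": "def456", "rotated": "2016-11-02"}
# }
# This layout is just my own convention, not any kind of standard.

def load_keys(path="keys.json"):
    with open(path) as handle:
        return json.load(handle)

def keys_needing_rotation(keys, max_age_days=90):
    """Flag any provider keys older than the rotation window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [name for name, entry in keys.items()
            if date.fromisoformat(entry["rotated"]) < cutoff]

keys = load_keys()
print(keys["twitter"]["api_key"])
print("rotate:", keys_needing_rotation(keys))
```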

I will be keeping an eye out for more API key management solutions, and begin the process of documenting the building blocks of these products and services, like I do with other areas. If you are looking to develop an API key management solution, feel free to reach out, as I have some feedback for your road-map along the way, and existing tools you could use to make it easier.


Adding a Suggested API Definition for API Portals to My API Management Spec Collection

One layer I am working to add to my API research is machine readable API definitions that I find, or generate from the APIs of the API service providers I keep an eye on. Within this new layer I'm aggregating the API specs of the companies who are offering services within the emerging areas of the API sector.

The first area I've started aggregating is within API management. 3Scale was very forward leaning with their willingness to open up their API definitions for their own API management infrastructure APIs. These are API specs that describe all of the features of the 3Scale API management platform, and represent one possible blueprint that API management service providers could emulate.

I have four separate API definitions included from 3Scale on my new page, something that could be broken down further if it makes sense. I also just added a suggested API definition for API portals--crafted by the APIMATIC team. They thought my idea for defining a common set of definitions across the 17+ areas I monitor was an interesting enough concept, and was something they wanted to explore further with me.

Zeeshan Ejaz Bhatti of APIMATIC pulled together this spec:

 

 

This is just one potential starting point for providing a common interface for managing your API portal. Zeeshan feels that if "API-Portals-as-a-Service" and other API management providers agreed on a common publish API format, it would benefit other API service providers like APIMATIC, who are providing services to their customers via these API portals.

I agree with Zeeshan, and might play around with adding an API facade for managing your API portal using Github + Jekyll, like I do with my standard API Portal template. This is all a work in progress. Right now I am just aggregating the API definitions for API service providers that have APIs. In my opinion this layer of the API space will differentiate API service providers from each other, demonstrating which ones will actually scale in the fast-growing API economy.
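
A facade like that could be as thin as a wrapper around the Github contents API, publishing portal pages into the Github Pages repo that Jekyll then renders--here is a rough sketch of the idea, with the repo, path, and token all being placeholders:

```python
import base64
import os
import requests

# A thin facade over the Github contents API for publishing portal pages into
# a Github Pages + Jekyll repo -- repo, path, and token are placeholders.
GITHUB_API = "https://api.github.com"
REPO = "example-org/developer-portal"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def publish_page(path, markdown, message="Update portal page"):
    """Create or update a page in the portal repo via the contents API."""
    url = f"{GITHUB_API}/repos/{REPO}/contents/{path}"
    payload = {
        "message": message,
        "content": base64.b64encode(markdown.encode()).decode(),
    }
    # If the file already exists, the contents API requires its current sha.
    existing = requests.get(url, headers=HEADERS)
    if existing.status_code == 200:
        payload["sha"] = existing.json()["sha"]
    response = requests.put(url, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

publish_page("getting-started.md", "# Getting Started\n\nSign up for a key...")
```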

Next, I'm taking API monitoring services like Runscope and API Science, and aggregating their API definitions as part of my research.


Establishing A Common API Definition That API Management Providers Can Use

I mentioned the concept of what I call API building blocks coming to life at API service providers yesterday. These are the features provided by API service providers that are made accessible via APIs. Mind blowing right? API service providers having APIs? Which then allows API providers to programmatically manage the operations of their own APIs? Who would have ever thought!! Actually it is a pretty clear example of API service providers who are kind of full of it when they do not offer their own APIs--meaning they are telling you about the importance of APIs, but not actually practicing what they preach. It is kind of like API providers who do not use their own APIs in their applications (dogfooding).

Anyhoo. I have done a lot of work to define the common building blocks across API service providers, spanning over 17 stops along the API lifecycle, and part of the next phase of my research is to connect these building blocks to actual API definitions that can help automate these vital API features. First up, I took the 3Scale API, and generated this list of common building blocks represented in the API for their API infrastructure.
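
The outlines below come from the provider's API definitions--a minimal sketch of that extraction, assuming a Swagger / OpenAPI style definition with tagged operations, looks like this (the file name is just a placeholder for whichever definition you are processing):

```python
import json
from collections import defaultdict

# Walk a Swagger / OpenAPI style definition and group each operation by its
# first tag, producing the kind of building block outline listed below.
def outline(definition_path):
    with open(definition_path) as handle:
        spec = json.load(handle)

    groups = defaultdict(list)
    for path, methods in spec.get("paths", {}).items():
        for verb, operation in methods.items():
            if verb.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue
            tag = (operation.get("tags") or ["untagged"])[0]
            summary = operation.get("summary", "")
            groups[tag].append(f"{summary} ({verb.upper()}) - {path}")
    return groups

for group, operations in outline("3scale-account-management.json").items():
    print(group)
    for line in operations:
        print("  •", line)
```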

Authorization

  • Authorize (App Id authentication pattern)(GET) - Read-only operation to authorize an application in the App Id authentication pattern
  • Authorize (API Key authentication pattern)(GET) - Read-only operation to authorize an application in the App Key authentication pattern
  • Authorize (OAuth authentication mode pattern)(GET) - Read-only operation to authorize an application in the OAuth authentication pattern

Reporting

  • Report (App Id authentication pattern)(POST) - Report the transactions
  • Report (API Key authentication pattern)(POST) - Report the transactions
  • Report (OAuth authentication pattern)(POST) - Report the transactions

Authorization + Reporting

  • AuthRep (Authorize + Report for the App Id authentication pattern)(GET) - Authrep is a 'one-shot' operation to authorize an application and report the associated transaction at the same time
  • AuthRep (Authorize + Report for the API Key authentication pattern)(GET) - Authrep is a 'one-shot' operation to authorize an application and report the associated transaction at the same time
  • AuthRep (OAuth authentication mode pattern)(GET) - Authrep is a 'one-shot' operation to authorize an application and report the associated transaction at the same time in the OAuth authentication pattern

Account Management

  • Account Feature List(GET) - Returns the list of the features available to accounts
  • Account Feature Create(POST) - Create an account feature
  • Account Feature Read(GET) - Returns an account feature
  • Account Feature Update(PUT) - Updates an account feature
  • Account Feature Delete(DELETE) - Deletes an account feature
  • Account Plan Feature List(GET) - Returns the list of the features associated to an account plan
  • Account Plan Features Create(POST) - Associate an account feature to an account plan
  • Account Plan Features Delete(DELETE) - Deletes the association of an account feature to an account plan
  • Account Plan List(GET) - Returns the list of all available account plans
  • Account Plan Create(POST) - Creates an account plan
  • Account Plan Read(GET) - Returns the account plan by id
  • Account Plan Update(PUT) - Updates an account plan
  • Account Plan Delete(DELETE) - Deletes an account plan
  • Account Plan set to Default(PUT) - Set the account plan to be the default one
  • Account List(GET) - Returns the list of the buyer accounts (the accounts that consume your API)
  • Account Find(GET) - Find an account by the username or email of its users (username takes precedence over email)
  • Account Read(GET) - Returns a buyer account
  • Account Update(PUT) - Updates a buyer account by id
  • Account Delete (DELETE) - Deletes a buyer account
  • Account Change Plan(PUT) - Changes the account plan of the buyer account
  • Account Approve(PUT) - Approves the account (changes the state to live)
  • Account Reject(PUT) - Rejects the account (changes the state to rejected)
  • Account Reset to Pending(PUT) - Resets the state of the account to pending
  • Account Set Credit Card(PUT) - Associates credit card tokens and billing address to an account
  • Account Delete Credit Card(DELETE) - Removes all credit card info of an account
  • Account Message(POST) - Sends a message to the account
  • Account Read(GET) - Returns your account

Application Management

  • Application Plan Feature List(GET) - Returns the list of features of the application plan
  • Application Plan Feature Create(POST) - Associates a feature to an application plan
  • Application Plan Feature Delete(DELETE) - Deletes the association of a feature to an application plan
  • Limits List per Application Plan(GET) - Returns the list of all limits associated to an application plan
  • Limit List per Metric(GET) - Returns the list of all limits associated to a metric of an application plan
  • Limit Create(POST) - Adds a limit to a metric of an application plan
  • Limit Read(GET) - Returns a limit on a metric of an application plan
  • Limit Update(PUT) - Updates a limit on a metric of an application plan
  • Limit Delete(DELETE) - Deletes a limit on a metric of an application plan
  • Pricing Rules List per Metric(GET) - Returns the list of all pricing rules associated to a metric of an application plan
  • Pricing Rules List per Application Plan(GET) - Returns the list of all pricing rules associated to an application plan
  • Application Plan List (all services)(GET) - Returns the list of all application plans across services
  • Application Plan List(GET) - Returns the list of all application plans of a service
  • Application Plan Create(POST) - Creates an application plan
  • Application Plan Read(GET) - Returns an application plan
  • Application Plan Update(PUT) - Updates an application plan
  • Application Plan Delete(DELETE) - Deletes an application plan
  • Application Plan Set to Default(PUT) - Makes the application plan the default one
  • Application List (all services)(GET) - Returns the list of applications across all services
  • Application Find(GET) - Finds an application by keys used on the integration of your API and 3scale's Service Management API or by id (no need to know the account_id)
  • Account Fetch Account Plan(GET) - Returns the account plan associated to an account
  • Application Key List(GET) - Lists app keys of the application
  • Application key Create(POST) - Adds a key to an application (valid only on the authentication mode app_id/app_key or oauth)
  • Application key Delete(DELETE) - Deletes a key of an application (valid only on the authentication mode app_id/app_key or oauth)
  • Application Referrer Filter List(GET) - Lists referrer filters of the application
  • Application Referrer Filter Create(POST) - Adds a referrer filter to an application
  • Application Referrer Filter Delete(DELETE) - Deletes a referrer filter of an application
  • Application List(GET) - Returns the list of application of an account
  • Application Create(POST) - Create an application
  • Application Read(GET) - Returns the application by id
  • Application Update(PUT) - Updates an application
  • Application Change Plan(PUT) - Changes the application plan of an application
  • Application Create Plan Customization(PUT) - Creates a customized application plan for the application
  • Application Delete Plan Customization(PUT) - Deletes the customized application plan of the application
  • Application Accept(PUT) - Accepts an application (changes the state to live)
  • Application Suspend(PUT) - Suspends an application (changes the state to suspended)
  • Application Resume(PUT) - Resume a suspended application

User Management

  • User List(GET) - Returns the list of users of an account
  • User Create(POST) - Creates a new user of the account (account_id)
  • User Read(GET) - Returns the user of an account
  • User Update(PUT) - Updates the user of an account
  • User Delete(DELETE) - Deletes a user of an account
  • User change Role to Member(PUT) - Changes the role of the user to member
  • User change Role to Admin(PUT) - Changes the role of the user to admin
  • User Suspend(PUT) - Changes the state of the user to suspended
  • User Unsuspend(PUT) - Change the state of the user back to active
  • User Activate(PUT) - Activate the user of an account
  • Limit List for End User Plans (GET) - Returns the list of all limits associated to a metric of an end user plan
  • Limit Create for End User Plans(POST) - Adds a limit to a metric of an end user plan
  • Limit Read for End User Plans(GET) - Returns a limit on a metric of an end user plan
  • Limit Update for End User Plans(PUT) - Updates a limit on a metric of an end user plan
  • Limit Delete for End User Plans(DELETE) - Deletes a limit on a metric of an end user plan
  • End User Plan List (all services)(GET) - Returns the list of all end user plans across services
  • End User Plan List(GET) - Returns the list of all end user plans of a service
  • End User Plan Create(POST) - Creates an end user plan
  • End User Plan Read(GET) - Returns an end user plan
  • End User Plan Update(PUT) - Updates an end user plan
  • End User Plan set to Default(PUT) - Makes the end user plan the default one
  • End User Read(GET) - Returns the end user by id
  • End User Create(POST) - Create an end user
  • End User Delete(DELETE) - Deletes an end user
  • End User Change Plan(PUT) - Changes the end user plan of an end user
  • User List (provider account)(GET) - Lists the users of the provider account
  • User Create (provider account)(POST) - Creates a new user in the provider account
  • User Read (provider account)(GET) - Gets the user of the provider account by id
  • User Update (provider account)(PUT) - Modifies the user of the provider account by id
  • User Delete (provider account)(DELETE) - Deletes the user of the provider account by id
  • User change Role to Member (provider account)(PUT) - Changes the role of the user of the provider account to member
  • User change Role to Admin (provider account)(PUT) - Changes the role of the provider account to admin (full rights and privileges)
  • User Suspend (provider account)(PUT) - Changes the state of the user of the provider account to suspended, remove the user's ability to sign-in
  • User Unsuspend (of provider account)(PUT) - Revokes the suspension of a user of the provider account
  • User Activate (provider account)(PUT) - Changes the state of the user of the provider account to active, to be done after sign-up

Analytics

  • Method List(GET) - List the methods of a metric
  • Method Create(POST) - Creates a method under a metric
  • Method Read(GET) - Returns the method of a metric
  • Method Update(PUT) - Updates a method of a metric
  • Method Delete(DELETE) - Deletes the method of a metric
  • Metric List(GET) - Returns the list of metrics of a service
  • Metric Create(POST) - Creates a metric on a service
  • Metric Read(GET) - Returns the metric of a service
  • Metric Update(PUT) - Updates the metric of a service
  • Metric Delete(DELETE) - Deletes the metric of a service
  • Application Usage by Metric(GET) - Returns the usage data for a given metric (or method) of an application
  • Service Usage by Metric(GET) - Returns the usage data of a given metric (or method) of a service
  • Service Top Applications(GET) - Returns usage and application data for the top 10 most active applications of a service

Service Management

  • Service Feature List(GET) - Returns the list of all features of a service
  • Service Feature Create(POST) - Creates a feature on a service
  • Service Feature Read(GET) - Returns a feature of a service
  • Service Feature Update(PUT) - Updates a feature of a service
  • Service Feature Delete(DELETE) - Deletes a feature of a service
  • Service Plan Feature List(GET) - Returns the list of features of a service plan
  • Service Plan Feature Add(POST) - Associates an existing feature to a service plan
  • Service Plan List (all services)(GET) - Returns a list of all service plans for all services
  • Service Plan List(GET) - Returns a list of service plans for a service
  • Service Plan Create(POST) - Creates a new service plan in a service
  • Service Plan Read(GET) - Returns a service plan by id
  • Service Plan Update(PUT) - Updates a service plan by id
  • Service Plan Delete(DELETE) - Deletes a service plan by id
  • Service Plan set to Default(PUT) - Sets the service plan as default
  • Service List(GET) - Returns the list of all services
  • Service Create(POST) - Creates a new service
  • Service Read(GET) - Returns the service by id
  • Service Update(PUT) - Update the service
  • Signup Express(POST) - This request allows you to reproduce a sign-up from a buyer in a single API call

Billing Management

  • Invoice List by Account(GET) - Returns the list of all invoices by account
  • Invoice by Account(GET) - Returns an invoice by id
  • Invoice List(GET) - Returns the list of all invoices
  • Invoice(GET) - Returns an invoice by id
  • Invoice(PUT) - Modifies the state of the invoice
  • Invoice Line Items List(GET) - Returns the list of all line items of an invoice
  • Invoice Payment Transactions List(GET) - Returns the list of all payment transactions of an invoice

Webhooks

  • Webhooks List Failed Deliveries(GET) - Lists the webhooks that could not be delivered to your endpoint after 5 attempts
  • Webhooks Delete Failed Deliveries(DELETE) - Deletes failed deliveries records

I've been using a subset of the 3Scale API management API definition as my standard blueprint for what other API providers should follow, for a while now. All API providers should have an API for base API account management--meaning your API consumers should be able to manage their accounts, services, apps, and billing via an API. This will be a differentiator between API providers in the near future, and if you aren't up to speed, developers will be looking elsewhere.

This portion of my work is in response to a group of open source API management providers looking to encourage interoperability between their platforms, and what better way to do this than a common API management definition. While not all API management solutions will have exactly the same features, if they can speak a common, API-defined language, the entire API space will be better off.

This is something I want to encourage across all 17+ of the API service areas I track on. I'm going to take a look at API monitoring, and also try to generate a common outline from the definitions of some of the service providers I track on. I'm using API definitions to generate these outlines, and potentially merging them across multiple API service providers. If you are one of the API service providers I track on, and have an API definition, make sure I have a link so I can include it in this portion of my research.
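
To make the blueprint a little more concrete, here is a minimal sketch of what consuming an account management API like this could look like from the consumer side--the base URL, key, and paths are hypothetical placeholders that mirror the blueprint above, not any specific provider's implementation.

```python
# A minimal sketch of consuming an API management API like the blueprint above.
# The base URL, key, and paths are hypothetical placeholders, not a specific provider's API.
import requests

BASE = "https://provider.example.com/management/v1"   # hypothetical management API root
API_KEY = "YOUR_PROVIDER_KEY"                         # hypothetical credential

def list_applications(account_id):
    """List the applications registered under an account."""
    resp = requests.get(
        f"{BASE}/accounts/{account_id}/applications",
        params={"api_key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()

def application_usage(account_id, app_id, metric="hits"):
    """Pull usage data for a single metric of an application."""
    resp = requests.get(
        f"{BASE}/accounts/{account_id}/applications/{app_id}/usage",
        params={"api_key": API_KEY, "metric": metric},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(list_applications("1234"))
    print(application_usage("1234", "5678"))
```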


Thinking Through Some Of My Defensive API Management Tactics

As I add each API to my stack, I consider security along the way. I require an API key to access all of my APIs using my 3Scale API management infrastructure, and I also have different tiers of access, and while defining this management layer my first impulse is always to put POST, PUT, and DELETE methods into the most trusted tiers.

The service composition layer in API management is where I feel the heart of the API approach is--the thing that makes it different from SOA, and other approaches. This is where you can loosen things up, trust your 3rd party developers, and allow serendipity to occur. If you always default to locking things down, and only allow the updating of resources by internal, or just trusted external sources, you are limiting the possibilities.

With this in mind I'm carefully evaluating a couple of defensive API management tactics I can employ:

  • Notifications - Make sure to send out an email or push notification to myself when POST, PUT, or DELETE is executed on certain APIs.
  • Key Locking - Allow for a certain volume of POST, PUT, and DELETE calls, but lock the API key once that volume is exceeded.

While these tactics won't prevent all bad situations, they can help me quickly identify them, and take action. My goal is to encourage people to develop on top of my APIs, and I'd rather focus on letting things flow, over locking everything down. There are certain security realities surrounding publicly available APIs, but honestly most people will never take the time to register for a key, and learn a system just so they can do malicious things, and for the ones that will, I have some defensive things in place to trip them up.

There are plenty of tools built into my 3Scale API infrastructure that do this for me; what I am considering are some additional measures I can build into my own APIs that employ the 3Scale API to better automate some of these defensive tactics. What are you doing to keep things safe, but also open? I'd love to hear other ideas for operating an API on the open Internet, in the safest, most sensible way.
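
To illustrate the notification and key locking tactics, here is a rough sketch of the logic I have in mind--the write threshold, email addresses, and the lock_key() call are all placeholders, and the actual suspension would happen through the management layer's API rather than a print statement.

```python
# A rough sketch of the notification and key locking tactics described above.
# The write threshold, email addresses, and lock_key() are placeholders; the
# actual suspension would be a call to the management layer's API.
import smtplib
from email.message import EmailMessage

WRITE_METHODS = {"POST", "PUT", "DELETE"}
WRITE_LIMIT = 100          # hypothetical per-day write threshold
write_counts = {}          # api_key -> number of write calls seen today

def record_request(api_key, method):
    """Call this from the API layer for every request that carries a key."""
    if method not in WRITE_METHODS:
        return
    write_counts[api_key] = write_counts.get(api_key, 0) + 1
    notify_me(api_key, method)
    if write_counts[api_key] > WRITE_LIMIT:
        lock_key(api_key)

def notify_me(api_key, method):
    """Send myself a simple email whenever a write method is executed."""
    msg = EmailMessage()
    msg["Subject"] = f"{method} executed by {api_key}"
    msg["From"] = "api@example.com"     # hypothetical addresses
    msg["To"] = "me@example.com"
    msg.set_content(f"Key {api_key} just issued a {method} request.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def lock_key(api_key):
    """Placeholder for suspending the key through the management layer."""
    print(f"Locking key {api_key} after exceeding {WRITE_LIMIT} write calls")
```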

Disclosure: 3Scale is an API Evangelist partner.


Looking At API Design, Deployment, And Management From A Form Point Of View

The concept of a form is one of those skeuomorphs that has taken on an entirely new life on the Internet. The concept of a form is baked into HTML, PDFs, and many other common aspects of our digital lives, while also still dominating many of the information exchanges in our physical world. There are a handful of APIs out there that let you build forms, and there are APIs that let you build forms for platforms like Drupal, but I have yet to see a platform that uses the concept of a form as a carrot to design, deploy, and manage your API--until now. 

I was introduced to Form.io at Gluecon this year, and was very pleased with the demo I was given (and my time playing with it since). Form.io is a platform that enables developers to build web and mobile applications using a drag & drop interface which allows you to create both the forms and the RESTful APIs all at once. I like this concept, because a form is something the average business user will potentially understand, and is something that has helped them also better understand web, PDF, and other digital platforms--which tells me it might do the same for the world of APIs.

Form.io lets users craft their resources from a forms point of view, allowing you to construct them using common elements like email, password, and address block, as well as being able to define the custom elements you will need. This is important because it shifts API design to begin from the standpoint of how data will be gathered or put to use, rather than just the system resource it came from, like a database. This approach, in my opinion, has the potential to bring API design closer to the people who are trying to solve everyday problems.

Once you have crafted your forms, and the underlying resources, Form.io gives you the ability to publish an application front-end as an AngularJS driven Single Page Application (SPA), which will be a whole other aspect that I will write about in coming weeks. Using Form.io, you end up with simple, embeddable forms you can publish anywhere, and a complete API that you can easily integrate with other systems, or web, and mobile applications. 
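
To give a sense of what working against a form-backed API looks like, here is a minimal sketch that submits a single form entry to a REST endpoint--the URL and field names are hypothetical placeholders, not Form.io's documented API.

```python
# A minimal sketch of submitting one entry to a form-backed REST API.
# The endpoint URL and field names are hypothetical, not Form.io's documented API.
import requests

FORM_ENDPOINT = "https://example-project.form.io/contact/submission"  # hypothetical

def submit_contact(name, email, message):
    """Submit a single form entry to the form's underlying API resource."""
    payload = {"data": {"name": name, "email": email, "message": message}}
    resp = requests.post(FORM_ENDPOINT, json=payload)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(submit_contact("Jane Doe", "jane@example.com", "Hello from the API side"))
```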

I am putting Form.io into the category of meaningful approaches to API design, deployment, and management for the masses, like APISpark for building APIs from common file and data sources, or Kimono for scraping web data and publishing it as a simple API. I also feel it will move the API conversation forward significantly with mainstream business users, like Blockspring is doing, because like spreadsheets, users just get forms. 

You will hear more about Form.io from me in coming months. The platform is in beta right now, but will be announcing a wider release soon. I am also helping them work through their thoughts about how they deliver their services in the cloud, on-premise, and their overall business model and service level agreements. I am eager to support services like Form.io because while they help move the API discussion forward, they do it in a way that supports the average business user, not just developers, and the API elite.

Disclosure: Restlet / APISpark is an API Evangelist partner.


SnapLogic and 3scale Announce Partnership to Bring API Management Capabilities to Integration Platform as a Service

SnapLogic, an industry leader in enterprise integration platform as a service (iPaaS), and 3scale, the leading API Management Platform, today announced a strategic partnership that aims at simplifying the development, publication and execution of any integration process. Under the terms of the partnership, the two companies have certified their respective platforms to interoperate seamlessly together, making it easy for application and data integration experts to expose SnapLogic’s Elastic Integration Platform dataflow pipelines as web APIs. These APIs, compliant with the REST architectural style, can be invoked by any authorized user, application, web backend or mobile app through a simple and standardized HTTP call in order to trigger the execution of the SnapLogic pipeline.

SnapLogic’s unified integration platform as a service (iPaaS) allows citizen integrators and developers to easily author multi-point integration pipelines that connect cloud and on-premise applications as well as disparate enterprise data sources for big data analytics and expose them as RESTful APIs. Once these pipelines are built, they can be released to developers through 3scale’s API Management Platform, accelerating the development and dissemination of public and private APIs.  

3scale’s distributed architecture and self-serve platform offer flexibility, performance and ability to scale. Powerful API access, policy and traffic controls make it simple to authenticate traffic, restrict by policy, protect backend services, impose rate limits and create access tiers. API documentation is friendly, interactive, intuitive, and clear with 3scale ActiveDocs, based on the Swagger Framework. Built-in analytics help API owners understand and control their traffic, identify the most active users, applications and methods and can help pinpoint traffic patterns.

“3scale shares our modern standards approach to application and data integration with a no-compromise, highly scalable API management platform,” said Jack Kudale, vice president of field operations at SnapLogic. “This partnership expands the enterprise integration possibilities for our joint solutions and allows us to deliver greater value to our customers.”  

About SnapLogic
SnapLogic is the industry’s first unified data and application integration platform as a service (iPaaS) that allows enterprise IT organizations and lines of business to connect faster and gain a better return on their cloud application and big data investments. SnapLogic’s modern architecture is powered by more than 300 Snaps, pre-built integration components that simplify and automate complex enterprise integration processes. Funded by leading venture investors, including Andreessen Horowitz and Ignition Partners, and co-founded by Gaurav Dhillon, co-founder and former CEO of Informatica, SnapLogic is used by prominent companies in the Global 2000. For more information call +1.888.494.1570 or visit www.snaplogic.com.

About 3scale
3scale is the leading self-serve, high performance API management platform, powering more than 600 customer APIs. API providers can easily package, distribute, manage and monetize APIs through a SaaS infrastructure that is powerful, flexible, secure and Web scalable. The 3scale platform enables the distribution of a company's data, content or services to multiple devices or mobile/Web applications, as well as the ability to easily productize APIs. Customers span the Fortune 500, government, academia, and startups. 3scale customers include Coldwell Banker, Johnson Controls, SITA, Crunchbase, Campbell’s Soup, UC Berkeley, Wine.com among others. The company also powers APItools for API consumers and APIs.io, the world’s first open source API search engine. For more information, visit http://www.3scale.net.


Reflecting On API Management And The Apigee IPO

It has been a little over three years since I published my first roundup of API management providers. I’ve been tracking on this new breed of companies long before I started API Evangelist, but in 2011 I started formalizing how I monitored what these companies were up to. In 2015, I now track on over 35 API management service providers, offering everything from simple proxies, to the full API management infrastructure stack you get from 3Scale.

I have met most of the API management providers, and after 5 years of covering them, it is no secret I have my favorites (you know who you are ;-). The business side of being an API management service provider has never really excited me, so I tend to stay away from the investment and acquisition stories, or speculating too much on who is winning the cash game. With that said, I think the Apigee IPO is a pretty significant milestone for the industry, something I have to give pause to, and reflect on how far we’ve come, and evaluate how an IPO compares to the acquisitions we’ve seen in recent years, or the other significant industry milestones.

The acquisitions of Mashery, Layer7, Vordel, and Apiphany showed the space was really maturing, and the recent name change by SOA Software to Akana shows the space has evolved over the last ten years. I think the Apigee IPO shows the overall space is actually growing up. I don’t think Apigee had many other alternatives, with a gagillion dollars in funding, but I still think it shows the space is moving out of its juvenile phase. I guess the actual IPO will be the true test of how grown up we all actually are, eh?

For me, another thing to note is where 3Scale is at. You see, I consider Mashery, 3Scale, and Apigee to be the OG three of API management service providers. Yeah, I know some of you have been around longer--SOA, Vordel, and other gateway solutions, but those three are the "OG API management". 3Scale has been plugging along slow and steady, taking on only the funding it needs, while Mashery was acquired, and now Apigee is IPOing. I know I’m biased when it comes to 3Scale, but I think their longevity and success is as notable as any IPO or acquisition milestone--it's been a long haul.

Another thing to think about is the amount of open source tooling that is available in 2015--I was pleased by the number of new players when I did my last roundup. I was also happy to work with WSO2 early on to understand what the space needed with its open source API management solution, and also the API Umbrella platform as part of the federal government work I’ve done. The amount of open source tooling that is available is a clear sign for me that API management is truly growing up, and is really a thing (it was a little shaky there for a while, couldn’t tell if I was dreaming, or awake ;-).

I would like to end this reflection, on the most important sign for me that the API space will continue its growth, and reach new heights in coming years. The fact that the conversation has moved way beyond just API management, with companies, services, and tooling emerging throughout the API life-cycle, stimulating design, deployment, discovery, integration, and management conversations that are generating the growth I speak of. Things were only about API management for a while, something that really worried me, but in 2015 the conversation goes much wider and deeper, pushing into even more exciting territory like visualizations, containerization, and objects.

In closing, congrats to Apigee, wishing you the best of luck in your IPO--it's been a fun ride, and I’m looking forward to it not ending anytime soon.

Disclosure: 3Scale, and WSO2 are API Evangelist partners.


Adding Four New Building Blocks Providing An API Management API Blueprint

I am adding four new building blocks to my list of suggested building blocks that API providers should consider when crafting their API management strategy. These four building blocks are based upon several things I’ve seen in the space, and some current deficiencies I’ve identified that could slow things down, and hold us back.

These new building blocks are all about API providers practicing what they preach, and making the accounts of API consumers, and developers, programmatic through an API management API. As I was profiling the Mailjet API as part of my email API research, I noticed they had an API for their developer accounts, and with the growth in the number of APIs being consumed, the need for account automation is only going to increase, something that is getting even more critical when you think about the API service composition overhead needed to support the containerization and micro-services movement. To help stimulate this area, and encourage API providers to automate their API accounts, I’m adding these four separate API building blocks, providing an overall blueprint for how API providers can launch a developer account API for their own API.

I used Mailjet’s approach as a base model, but quickly began looking at 3Scale’s own API design: 1) I use 3Scale, so I’m familiar with their approach to user management, and 2) their API was simple, robust, and clearly a proven approach that was working. I broke 3Scale’s own endpoints down into four logical groups for v1 of this API management API blueprint. There were a number of features they had that I wanted in there, but I figured it was better to start out simple, and meet most of the common needs I'm seeing from my own experiences.

Here is what I came up with, after taking a look at 3Scale’s active docs.

API Management API - User Management
Allow API consumers to manage their own accounts via an API management API, enabling users to create, read, update, and delete information associated with their account--fields may vary, depending on what information each API requires for user accounts (a minimal sketch follows the list below).

  • User Create
  • User Read
  • User Update
  • User Delete
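
As a rough illustration of this user management block, here is a minimal sketch of the four endpoints using Flask--the fields, in-memory storage, and resource paths are placeholders of my own, not any particular provider's API.

```python
# A minimal sketch of the user management block using Flask.
# Fields, storage, and paths are placeholders--adapt to your own management layer.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
users = {}    # user_id -> user record; an in-memory stand-in for real storage
next_id = 1

@app.route("/users", methods=["POST"])
def user_create():
    global next_id
    user = {"id": next_id, **request.get_json(force=True)}
    users[next_id] = user
    next_id += 1
    return jsonify(user), 201

@app.route("/users/<int:user_id>", methods=["GET"])
def user_read(user_id):
    if user_id not in users:
        abort(404)
    return jsonify(users[user_id])

@app.route("/users/<int:user_id>", methods=["PUT"])
def user_update(user_id):
    if user_id not in users:
        abort(404)
    users[user_id].update(request.get_json(force=True))
    return jsonify(users[user_id])

@app.route("/users/<int:user_id>", methods=["DELETE"])
def user_delete(user_id):
    if user_id not in users:
        abort(404)
    del users[user_id]
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```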

API Management API - Account Management
Beyond the user profile, allow API consumers to manage their billing related account information, enabling the programmatic adjustment of credit cards, and the ability to check in on invoices, and payments, through the API management API.

  • Account Set Credit Card
  • Account Delete Credit Card
  • Invoice By Account
  • Invoice Line Item List
  • Invoice Payment Transaction List

API Management API - Service Management
Enable API consumers to retrieve information about plans available for a specific API, or stack of APIs. Allow for the listing of plans, and the features available for each plan. I also recommend considering the ability to set a default plan for an account, enabling smooth application management.

  • Service Plan List
  • Service Plan Feature List
  • Service Plan Set To Default

API Management API - Application Management
Ideally each user can have multiple applications, consuming API resources at various rates. This allows for the most flexibility in API consumption, but may vary depending on what API management infrastructure is employed. This API should allow for management of all applications, with secure control over application keys. Additionally, there should be analytics available, with a short, simple, but robust list of metrics.

  • Application Plan List (per isolated service)
  • Application List
  • Application Create
  • Application Read
  • Application Update
  • Application Change Plan
  • Application Key List
  • Application Key Create
  • Application Key Delete
  • Application Usage by Metric

This represents the v1 stack I’d like to see every API provider offer, as well as for my own portable stack of API management APIs. I don't just want my core APIs to have these features, I want every wholesale API I deploy in other companies' infrastructure to be equipped with the same basic stack. I can add in more features later, but I think this represents the minimum viable stack for automating API consumption at this point.

Next up, I’ll build a Docker image for deploying this API management infrastructure, alongside other APIs, as a single virtual stack of loosely coupled micro-services. When bundled with the service composition 3Scale affords me, it will give me an unprecedented ability to orchestrate my API infrastructure. Hopefully it can also provide a blueprint for others to use, and evolve beyond what I can do on my own.

Disclosure: 3Scale is an API Evangelist partner.


Making My 3Scale API Management Portable With A Containerized Micro-Service

As I work to find balance in this new micro-service, container driven world of APIs, 3Scale is continuing to be an essential layer to the business of my API / micro service orchestration. In alignment with what I’ve been preaching for the last 5 years, I'm needing to re-define my valuable API infrastructure resources to be as agile, flexible, and portable as I prescribe in the public stories that I tell.

Using 3Scale I have the service composition for my platform pretty well defined. The problem is, this definition is not as portable and flexible as it could be for the next shift in my API operations, via Docker, allowing me to manage loosely coupled stacks of micro-services. Within this new approach to API operations, I need every single Docker container I orchestrate to ping home, and understand where in my business service composition it exists and operates. This service composition allows me to offer the same service to everyone, or a multitude of services to as specific, or as diverse, a group of customers as I need. I don't want people to just use my APIs, I want them to use them exactly where and how they want and need to.

I need this stack of API management infrastructure available for me within any domain, complete, or partial, on any containerized driven architecture I need--aka AWS, Google, Microsoft, Rackspace, etc.

Even if this infrastructure is just a proxy of the 3Scale infrastructure API, I need it portable, living as a proxy anywhere I need it. I also need to be able to transform it (maybe APITools?) along the way. I need a modular set of 3Scale API management infrastructure to design my own API infrastructure no matter where it lives, in the cloud or on-premise, even if it is in my closet on a Raspberry Pi, or on the wifi router for my home.

I can sign up new users to my API infrastructure, and allow them to access and consume any API resources I am serving up. I use this same infrastructure to build my applications, as well as make it accessible to my consumers. At any point one of my consumers may need to become a provider—I need their infrastructure to be portable, transferable, and as flexible as it possibly can be.

To accommodate the next wave of API growth I need my news API to be as flexible as possible for my needs, and I need to be able to also deploy it in any of my clients' infrastructure, when I need. This enters a whole new realm of wholesale API deployment and management. I can now segment usage of my APIs beyond just my own management service composition, and through my containerized deployment infrastructure, I can add an entirely new dimension to my architecture. The only thing I need is for my 3Scale infrastructure to be portable, containerized, and reusable—and since 3Scale practices what it preaches, and has an API, I can move forward with my ideas.

All of this is doable for me because 3Scale has an API for their API infrastructure (*mind blown*), I just need to design the right Docker image to proxy the API(s), and extend it as I need. As I do with all of my micro-services, I’m going to use Swagger to drive the functionality. I can take the Swagger definition for the entire stack of 3Scale API management resources, and include or omit each endpoint as I need—delivering exactly the API management stack I need.
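
To show what the Swagger-driven part of this could look like, here is a minimal sketch that loads a Swagger 2.0 definition and keeps only the paths I want my proxy to expose--the file names and the include list are placeholders, not the actual 3Scale definition.

```python
# A minimal sketch of trimming a Swagger 2.0 definition down to just the
# endpoints I want my management proxy to expose. The file names and the
# include list are placeholders for whatever definition you are working from.
import json

INCLUDE_PATHS = {            # hypothetical subset of management endpoints
    "/admin/api/users.json",
    "/admin/api/applications.json",
}

def trim_swagger(source_file, output_file):
    """Read a Swagger definition and write a copy containing only the paths I need."""
    with open(source_file) as f:
        definition = json.load(f)
    definition["paths"] = {
        path: operations
        for path, operations in definition.get("paths", {}).items()
        if path in INCLUDE_PATHS
    }
    with open(output_file, "w") as f:
        json.dump(definition, f, indent=2)

if __name__ == "__main__":
    trim_swagger("management-api-swagger.json", "my-management-stack.json")
```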

More to come on my 3Scale API management micro-service Docker image, as I roll it off the assembly line.


API Management Infrastructure And Service Composition Is Key To Orchestration With Microservices In A Containerized World

As I work to redefine my world using microservices, I have this sudden realization of how important my API management infrastructure is to all of this. Each one of my microservices is a little API that does one thing, and does it well, relying on my API management infrastructure to know who should be accessing it, and exactly how much of the resource they should have access to.

My note API shouldn’t have to know anything about my users, it is just trained to ask my API management infrastructure whether each user has the proper credentials to access the resource, and what the service composition will allow them to do with it (aka read, write, how much, etc.). My note API does what it does best, store notes, and relies on my API management layer to do what it does best--manage access to the microservice.
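
As a rough sketch of what that hand-off looks like, here is a note service that defers every access decision to the management layer before it touches the resource--the authorization URL and parameters are stand-ins, not actual 3Scale calls.

```python
# A rough sketch of a note microservice deferring access decisions to the
# API management layer. The authorization URL is a stand-in, not a real
# 3Scale endpoint--swap in whatever your management infrastructure exposes.
import requests
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
AUTH_URL = "https://management.example.com/authorize"  # hypothetical
notes = []

def authorized(api_key, action):
    """Ask the management layer whether this key can perform this action."""
    resp = requests.get(AUTH_URL, params={"api_key": api_key, "action": action})
    return resp.status_code == 200

@app.route("/notes", methods=["GET", "POST"])
def notes_endpoint():
    api_key = request.args.get("api_key", "")
    action = "write" if request.method == "POST" else "read"
    if not authorized(api_key, action):
        abort(403)
    if request.method == "POST":
        notes.append(request.get_json(force=True))
        return jsonify(notes[-1]), 201
    return jsonify(notes)

if __name__ == "__main__":
    app.run(port=8081)
```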

This approach to API management has allowed me to deploy any number of microservices, using my API management infrastructure to compose my various service packages—this is called service composition. I employ 3Scale infrastructure for all my API / microservice management, which I use to define different service tiers like retail, wholesale, internal, and other service specific groupings. When users sign up for API access, I add them to one of the service tiers, and my API service composition layer handles the rest.

Modern API management service composition is the magic hand-waving in my microservice orchestration, and without it, it would be much more work for me to compose using microservices in this containerized API world that is unfolding.

Disclosure: 3Scale is an API Evangelist partner.


Messente API: Always Use A Backup DNS Solution

I found the DNS implementation over at the Messente SMS API interesting, and worthy of sharing for deeper evaluation. I've been heavily considering the various approaches API providers take when crafting their domains, or subdomains, for API access over the last couple of weeks.

During some research time today I stumbled across the Messente SMS API which opts to provide two domains for making HTTP(S) requests of their API:

  • api2.messente.com
  • api3.messente.com

Messente provides a little disclaimer to handle the developer side of manually load-balancing these API calls:

These two domains have the same final destination regarding the API functions. In order to ensure that your requests always reach Messente API services, please use one of them as primary and the second one as backup. Both API domains work as equal, but in case of any unexpected downtime with one of them (HTTP 5xx), the other one must be used on client side.

I'm not sure this manual approach to providing API endpoints is the optimal path when delivering on the stability of your API, let alone the location of your resources, but it does provide an interesting contrast on the perspectives that are available out there in API-land.
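
For developers who do follow Messente's guidance, the client-side failover is straightforward to wrap up--here is a minimal sketch that retries the backup domain on a 5xx response (the path and parameters are placeholders, not Messente's actual API).

```python
# A minimal sketch of the client-side failover Messente asks for:
# try the primary domain, and fall back to the backup on HTTP 5xx.
# The path and parameters are placeholders, not Messente's actual API.
import requests

API_DOMAINS = ["api2.messente.com", "api3.messente.com"]

def call_api(path, params):
    """Try each domain in order, falling back when a 5xx (or network error) occurs."""
    last_error = None
    for domain in API_DOMAINS:
        try:
            resp = requests.get(f"https://{domain}{path}", params=params, timeout=10)
            if resp.status_code < 500:
                return resp
            last_error = RuntimeError(f"{domain} returned {resp.status_code}")
        except requests.RequestException as exc:
            last_error = exc
    raise last_error

if __name__ == "__main__":
    print(call_api("/example_endpoint", {"text": "hello"}).status_code)
```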

Sometimes I feel like I should rebrand as API Anthropologist, as I find the approach of my fellow API providers more interesting than what I'd expect to find in a mature API landscape. This reflects the importance of showcasing what is actually going on, to help bridge from where we are to where we should be, rather than focusing exclusively on where we should be. (deep shit, man)


Moving Elasticsearch Into API Management With New API Security And Access Features

Elasticsearch, the open source, distributed, real-time search and analytics engine, just announced that it is introducing a security layer on top of their API driven search platform. Historically you had to secure any APIs exposed via Elasticsearch through your own proxy or firewall solution; now, with "Shield", you can natively manage access to your APIs directly in Elasticsearch.

Shield, in the same spirit as Marvel, is built on top of Elasticsearch's public extension points, and is easily installed as a plugin to add security features to any existing Elasticsearch installation. It does not require a different distribution of Elasticsearch, and relies heavily on the open public APIs Elasticsearch already exposes.

The security Elasticsearch is bringing to the table reflects the core features you see in the API space from API infrastructure providers like 3Scale--providing the basics of what you need to secure access to API endpoints:

  • Role-based Access Control - Set granular cluster, index, and alias-level permissions for each user of your Elasticsearch cluster. For example, allow the marketing department to freely search and analyze social media data with read-only permissions, while preventing access to sensitive financial data.
  • Authentication System Support - Shield integrates with LDAP-based authentication systems as well as Active Directory, so your users don’t need to remember yet another password. We also provide a native authentication system, for those who want to manage all access within Elasticsearch.
  • Encrypted Communications - Node-to-node encryption protects your data from intruders. With certificate-based SSL/TLS encryption and secure client communications with HTTPS, Shield keeps data traveling over the wire protected.
  • Audit Logging - Ensure compliance and keep a pulse on security-related activity happening in your Elasticsearch deployment; record login failures and attempts to access unauthorized information.

I've had Elasticsearch in the API deployment research project for some time now, but now I will add it to my API management research as well. If you can manage your API access, user roles, and generate log files for analytics from Elasticsearch API endpoints, the tool is moving squarely into the API management category.
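
For a sense of what this looks like in practice, here is a minimal sketch of querying a Shield-protected Elasticsearch endpoint over HTTPS with basic authentication--the host, index, and credentials are placeholders.

```python
# A minimal sketch of hitting a Shield-protected Elasticsearch endpoint
# over HTTPS with basic authentication. Host, index, and credentials are
# placeholders--the point is that access control now happens natively.
import requests

ES_HOST = "https://elasticsearch.example.com:9200"  # hypothetical cluster
AUTH = ("marketing_readonly", "s3cret")             # hypothetical role-based user

def search(index, query_string):
    """Run a simple query_string search against one index as a restricted user."""
    resp = requests.get(
        f"{ES_HOST}/{index}/_search",
        params={"q": query_string},
        auth=AUTH,
        verify=True,  # certificate-based SSL/TLS per the Shield feature list
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(search("social_media", "api management"))
```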

It makes me happy to see open source tools like Elasticsearch improving their security features. Elasticsearch is something I recommend government agencies use when looking to open up access to document stores using APIs. I would like to see more of the API management players working together to allow for interoperability between management platforms, but I’m guessing this is a wish I won’t get granted anytime soon.

Disclosure: 3Scale is an API Evangelist partner.


New API Management Providers: Clean, Modern API Portals With ReadMe

It makes me happy to see new arrivals in the world of API management service providers, especially after all the consolidation we saw last year with many of the 1st and 2nd wave of providers like Mashery, Vordel, Layer7, and Apiphany. One of the new API management providers that has emerged is ReadMe, which is looking to provide an attractive, simple, and intuitive way to launch developer portals for your APIs.

ReadMe reminds me of some of the landing page tools designed for web marketers over the last decade to provide informative gateways for site visitors, but ReadMe is all about providing meaningful doorways to our API resources. First, I like ReadMe's definition of what a developer hub is:

  • Documentation - Topical guides, tutorials and troubleshooting.
  • API Reference - Low-level, deep-dive reference material.
  • Community - Provide support and answer questions.

Their definition of a developer hub goes beyond just the technical documentation that many developers see, and acknowledges that your portal should be about providing the resources necessary to educate API consumers, and potentially build community around their needs. ReadMe provides the essential building blocks I'd expect of any modern API portal builder:

  • Theme Builder - Easily create a beautiful dev community that matches your brand.
  • Editor - Markdown-based drag-and-drop editor makes documentation almost fun.
  • API Explorer - Let users play with your API right inside the documentation.
  • Application Keys - Your users can view their application keys embedded right in the docs.
  • Support - Let users ask questions and request features in the support forums.

Then ReadMe goes a little further to include some concepts that I see in some of the leading API providers, features that go beyond just basic features, allowing you to better manage your API portal:

  • Collaboration - Crowdsource your docs! Users can keep docs current by suggesting changes.
  • GitHub Sync - Keep auto-generated reference docs synced with your actual code changes.
  • Versioning - Maintaining old or testing beta versions of your docs is a breeze.

In 2014, emulating the social elements that GitHub has introduced into the world of coding is essential in your API program. You cannot manage your entire API community by yourself, and including your developers in the process is essential. This adds relevant layers to the term "open" that everyone likes to use, providing the roots you will need to actually build trust with your developers, something that goes both ways, and will also grow your own trust of developers within your own API community.

I'm keeping an eye on what ReadMe is up to, alongside the other API management providers I've been tracking on. I haven't given a lot of attention to the API management space in the last year, as I've been focusing on the faster growing areas like API discovery, design, and integration, but now that I see new players stepping up, I will make sure to give the area equal attention in my research and monitoring.


Taking A Fresh Look At What Open Source API Management Architecture Is Available

I’ve been an outspoken advocate for more open source API management tooling for some time now. I'm sensitive to the fact that startups have to make money in the API space, but in my opinion a certain layer of the API space needs to remain open and interoperable for this all to work, and I feel that open source tools are an important variable in this equation. When I last checked into what was available in the space, there really wasn't much, so in 2014 I figured I'd take another look.

Originally there was just one player in the space that was completely open source, and that is WSO2:

API management is just one tool in the massive open source catalog that WSO2 brings to the table. The company provides a number of open source tools including identity & authentication, ESB, data and API management solutions.

WSO2 is the shining open source example in the space currently, and I was very happy to see them emerge on the landscape. I feel WSO2 plays a strong role in the API space, but their approach to open source is the enterprise version, where being open gets rid of software licensing costs (yay), but is targeted specifically at an enterprise audience. I’m not the enterprise, so to complement what they bring to the table, I’m also looking for a different version of “community”, and eager to see smaller players step up as well.

After WSO2 I got a taste of open source from Alcatel Lucent, with their release of API Grove, which was a whole other definition of open source:

API Grove was open sourced late in 2012 by Alcatel Lucent, holding some promise that another strong enterprise open source player would step up, but within months it would become clear that it was not a live open source offering, but Alcatel abandoning their API program, and publishing it as open source to get the press release.

I’d call API Grove enterprise fire sale open source, and while the code is out there (I think), there is nobody home, not an enterprise, or other community to be seen. This type of open source release just makes me sad.

After WSO2 and API Grove, the only other player I saw emerge on the scene was ApiAxle:

ApiAxle is a proxy that sits on your network, in front of your API(s), and provides common API management features like rate limiting, authentication, and caching. ApiAxle was recently purchased by application platform infrastructure provider Exicon.

ApiAxle has been on my list for a couple years now, and was recently purchased by Exicon. The site is pretty good looking, but I can't tell if there is any activity in the community recently. The blog is silent, but the site was updated a few days ago, and the Github repo for the project was last updated July 31st—it can be hard to tell sometimes, just how active an open source project is.

Beyond ApiAxle, there are two API management pioneers who have been hard at work releasing open source tooling:

3Scale is one of the original API infrastructure providers, and has developed an open source API proxy using the NGINX Web Server, designed to work independently, but also be able to take advantage of freemium 3Scale API analytics, billing, etc.
Like 3Scale, Apigee is one of the original API infrastructure providers, and has been working on Volos, an open source Node.js solution for developing and deploying production-level APIs, providing common features such as OAuth 2.0, caching, and quota management for APIs.

It can be hard to envision exactly how open source fits into 3Scale or Apigee’s business model, but as the core features of API management become more commoditized, I can't help but think that open source tooling will increasingly be a reality on the front-line for these API infrastructure providers.

After these projects from leading API management providers, I hadn't seen any other open source API management tooling until I started to look closer at APIs in the public sector:

Socrata has been providing API and open data management tools to governments of all shapes and sizes for some time now, and along the way is open sourcing many of its tools, including an open source developer portal that anyone can use to manage their API.
API Umbrella was born out of the National Renewable Energy Laboratory (NREL) to manage their own APIs, but has since evolved to be used by other agencies, as well as being adopted by central data.gov efforts within the federal government. API Umbrella is a proxy that sits in front of your APIs. It can seamlessly add common functionality like API keys, rate limiting, and analytics to any API.

I think what is happening in the public sector API space reflects what the private sector is needing as well, in regards to openness, interoperability, and the room to play, experiment, and figure things out before you have to start putting too much cash on the line. I just think the government is in more of a position to mandate this, where in the public sector freemium solutions from 3Scale have been a buffer for this demand, but as the space expands I think open source tooling will begin to evolve to provide further relief valves.

In doing this research, what has really caught my attention, is the number of new players that are picking up momentum in 2014:

Repose is an open-source platform that you can use to build stacks of reusable software components. These components can be leveraged by service developers to perform common API processing tasks. Repose can be used inside a service to perform API operations. It can also run on one or more separate servers as a proxy to one or more services. 
Tyk is an open source, lightweight, fast and scalable API gateway. Set rate limiting, request throttling, and auto-renewing request quotas to manage how your users access your API. Tyk supports access tokens, HMAC request signing, basic authentication and OAuth 2.0 to integrate old and new services easily.
Gluu provides an open source authentication and API access management stack, called the Gluu Server, which helps companies secure Web and mobile applications. The Gluu Server leverages standards such as OAuth2, OpenID Connect, UMA, SAML 2.0, and SCIM to enable federated single sign-on (SSO) and trust elevation.
Loopback is an open source API framework powered by Node for quickly creating APIs. Node is good for getting projects done quickly, leveraging the broad knowledge of JavaScript, and to scale to the concurrency that is well suited for web APIs.

Overall I think these new players reflect the maturation of the space, but beyond that I'm not sure what it means for the API world; it will take some time for me to work through what features are being offered, and what business models exist around these open tools. In my experience, the motivations behind open source mean everything, and the relationship an open source project has to its parent company can make or break the momentum any open source project will possess--setting the tone for any community, enterprise or otherwise.

Eventually I'll bring together the open source tools I've found in the API management layer, with the open source design tooling I'm seeing from Apiary and Swagger, and other API design providers, as well as the wealth of tooling I'm seeing for API integration—still playing catch up on the roundup here. When I squint my eyes, and think about the space, I can't help but feel that we are getting closer to my earlier vision of open source in the API space, but we still have a lot of work to do.

What else am I missing from the pool of open source API management tooling? I depend on my audience letting me know what they are using.

P.S. I know I'm going to get an email from Mulesoft on the piece, I’m sure you guys fit in here somewhere, but honestly I spent about 2 hours looking through your stuff, and I can’t figure out just exactly what version of open source you guys are. I understand the Mule ESB is open source, which isn't squarely in my API management category, and after that I really can't tell with the other open tooling you guys have—feel free to post a blog post response, helping me, and the community understand better.

Disclosure: 3Scale and WSO2 are both API Evangelist partners.



We are very pleased to announce the availability today of the IBM API Management Service – a public cloud SaaS offering of IBM API Management! This offering gives you the opportunity to use the power and flexibility of the IBM API Management product, but in a hosted managed service run by IBM on your behalf, freeing you from the need to manage the infrastructure yourself and instead focus on the business services you want to expose. To enable you to get a feel for this new capability, we invite you to sign up for a free 30 day trial through the IBM Cloud Marketplace, or simply click the link below and then select “GET /started”.

We are really excited about this new stage in the evolution of IBM’s API Management solution, and we look forward to hearing your comments and experiences on the platform, whether it be to carry out a pilot using the SaaS offering before you deploy our on-premise offering, or if you are looking to deploy your production workload on the cloud.

New to API Management? API Management is a rapidly emerging space in the IT industry which gives you the tools and capabilities to rapidly define, secure, scale and manage APIs in order to provide access to business services in new channels like Mobile. In addition to the technical ability to expose an API, it also importantly provides the ability to socialize your API with target communities of application developers, giving them the opportunity to learn about your API and sign up to use them. Finally, API Management solutions provide analytics tools for assessing how successfully your APIs are being used, so that the business owners can measure the adoption of the APIs. For more information, please see the Introduction to IBM API Management and related links.

URL: https://developer.ibm.com/api/2014/09/26/apimsaas_has_arrived/


In The Future APIs Will Be Default For All Cities

In 2014 we are making significant progress in deploying APIs in support of city operations, but we still have so much more work ahead of us when it comes to making public resources available. You can find a dedicated developer area full of data sets, and APIs, in most major US cities like New York, Chicago, San Francisco, Seattle, Philadelphia, Washington D.C. and many more, but what else can we do to really pick up the momentum and quality?

Standardizing API Design Practices
APIs are not that difficult to design with the right education, and experience. Developers who work on city contracts, or are employed by the city should all be taught common web API design practices, and be exposed to modern API design tooling like Swagger and Apiary. Even with this type of education, there will still be many differences between city deployments based upon needs, and tactics, but a little training could go a long way to make city operations more streamlined.

Open Solutions For API Deployment
There are a lot of common approaches to delivering city services, which means there should also be a number of ways to provide standardized, open solutions for deploying APIs that support city operations. There should be a wealth of open source, WordPress-like solutions for deploying APIs in support of government operations. Sometimes connecting to legacy systems is just too much work, and deploying a simple, standalone solution, then syncing using data dumps or directly with backend systems, might be more fruitful.

Common API Management Vision
I’m pretty impressed with the standard approach to deploying city developer areas, and delivering data sets, and APIs, but in reality this is the result of the hard work of Socrata, one of the API management providers dedicated to the government space. I think Socrata, and the other vendors out there are definitely one piece of the puzzle for managing APIs for city operations, but I also think we need other competing, open solutions similar to API Umbrella which is being used across the federal government.

Open Source Tooling Across Cities
When it comes to helping cities better serve their citizens, and save money along the way, I can’t think of a better place to start than by providing common, open source tools for delivering web, and mobile applications on top of city data and APIs. We have to stop re-inventing the wheel for each city when it comes to developing common apps; city needs are going to be very similar across cities. Just take a look at solutions like Open311, and let’s get to work on delivering similar solutions for every part of city operations.

There is no reason each city should have to go at it alone when it comes to designing, deploying, managing, evangelizing, and putting APIs to work across city operations. We should have standard data models, API definitions, and a wealth of open source tools for cities to put to work.

I don't see APIs as the solution for all of our cities' problems, but I do think that APIs should be common practice for ALL cities. Every city should be publishing all of their data and content in a machine readable way, without causing employees any extra work—it should just be part of normal operations.

In the future, all cities will have standard APIs, and common open source solutions that can be put to work serving citizens in all aspects of city operations. This is how we are going to empower our cities to do more with less, and make governing more inclusive for everyone.


Explaining APIs To Senior Leadership: Access To Company Resources Without The IT Hassle

One question I get pretty frequently from my readers, is about how they should explain APIs to their senior leaders, specifically the non-tech savvy executives. In my opinion, these conversations can be some of the most important ones, not just for a single company, but potentially an entire industry. To help support this effort, I’m working through several stories that anyone can put to work when trying to convince their senior leaders that APIs are a thing--this week is about access to resources.

APIs are all about making vital company resources available in a self-service, and secure way over the open Internet. Despite popular belief, most APIs are not publicly available, just the overview, documentation, code samples, and other building blocks are publicly available. If a developer actually wants to use an API they have to register, and be given access to API endpoints, before they can do anything with them. There are some pretty proven approaches to API management out there, which include a centrally located developer center, simple API registration form, and potentially multiple service levels, that all help manage how APIs are accessed and put to use.

API resources can be anything from a company directory, to details of specific projects, or possibly access to company compute, storage, and other common IT resources. Anything you do on your computer at work, or have published via your website, can be made available via APIs, and easily accessed through a public or private portal, in a 24/7, self-service manner--all without needing to make a request for IT resources. This type of efficiency is what every company needs to be competitive in coming years, ensuring that every employee has access to the resources they need to get their job done.

Access to corporate or organization resources via APIs doesn’t have to be something just for programmers. It is pretty likely that you already have web services at your company, but these APIs have been designed just for IT and geeks, where modern APIs come with supporting building blocks like widgets, spreadsheet connectors, and other tools that make API resources accessible and usable by anyone. This is an important distinction, one that is democratizing vital resources, and putting them into the hands of people who can benefit the most, and not restricted by classic IT bottlenecks.

IT, and developers will often say APIs aren’t for the average business person, and many business folks are used to this type of rhetoric and have been trained to avoid anything API related, feeling this isn’t for them—something tech folks like, because it keeps them in control, feeding into classic IT power structures. However, for the last 14 years, web APIs have been making vital resources available to internal and external developers who are building web and mobile applications, as well as the internal power user, and are something every company should consider, when looking to help everyone in a company do their job better.


6,482 Datasets Available Across 22 Federal Agencies In Data.json Files

It has been a few months since I ran any of my federal government data.json harvesting, so I picked my work back up, and will be doing more work around the datasets that federal agencies have been making available, and telling the stories across my network.

I'm still surprised at how many people are unaware that 22 of the top federal agencies have data inventories of their public data assets, available in the root of their domain as a data.json file. This means you can go to many an example.gov/data.json and find a machine readable list of that agency's current inventory of public datasets.
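
As a quick example of how approachable this is, here is a minimal sketch that pulls an agency's data.json file and prints the dataset titles--the domain shown is just an example, and the exact domain varies by agency.

```python
# A minimal sketch of pulling a federal agency's data.json inventory and
# listing the dataset titles it describes. The domain is an example; the
# exact domain varies by agency.
import requests

def list_datasets(domain):
    """Fetch an agency's data.json file and return the dataset titles."""
    resp = requests.get(f"https://{domain}/data.json", timeout=30)
    resp.raise_for_status()
    inventory = resp.json()
    # Some files are a plain list of datasets; others nest them under "dataset".
    datasets = inventory.get("dataset", []) if isinstance(inventory, dict) else inventory
    return [d.get("title", "(untitled)") for d in datasets]

if __name__ == "__main__":
    for title in list_datasets("www.energy.gov"):
        print(title)
```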

I currently know of 22 federal agencies who have published data.json files:

Consumer Financial Protection Bureau
Department of Agriculture (USDA)
Department of Defense (DOD)
Department of Energy (DOE)
Department of Justice (DOJ)
Department of State
Department of the Treasury
Department of Transportation (DOT)
Department of Veterans Affairs (VA)
Environmental Protection Agency (EPA)
General Services Administration (GSA)
Institute of Museum and Library Services (IMLS)
Millennium Challenge Corporation (MCC)
National Aeronautics and Space Administration (NASA)
National Archives and Records Administration (NARA)
National Institute of Standards and Technology (NIST)
National Science Foundation (NSF)
National Transportation Safety Board (NTSB)
Nuclear Regulatory Commission (NRC)
Office of Personnel Management (OPM)
Social Security Administration (SSA)
United States Agency for International Development (USAID)

You can view the full data.json file for each agency at the root of its domain. You can also visit my Federal Agency Dataset Adoption work to see all of the datasets listed for each agency. There is still one bug I've noticed in the adoption process, so don't adopt anything quite yet.

The goal of this is just to highlight, again, that there is a wealth of open data resources just waiting for all of us open gov hackers to take advantage of, and make sense of. Federal agencies need our help, so get involved, there is a lot of work to be done.


What I Have Been Calling API Trends, Are Slowly Being Baked Into API Operations

In my monitoring of the API space, when I started seeing a large number of blog posts, tweets, companies, and other elements I track on get tagged with the same tag over and over, I take notice. My blogging, CRM, and news curation system all have their own tag cloud interface for the week, showing which tags have been applied--so if a tag gets heavy usage, I know it.

Over the last couple of years, I've spun up new research into other areas within the world of APIs, beyond my core design, deployment, management, evangelism, discovery, and integration research. I created separate buckets beyond just provide and consume to track on these new areas, called trends, opportunities, and priorities.

In 2014 it is beginning to seem like each of my trend research areas is getting baked directly into API platforms, ranging from real-time features with Firebase, to reciprocity by default using Zapier. API providers are learning that having a real-time layer, or a reciprocity layer, baked into their platform is a good thing, and why reinvent the wheel when you have kick ass solutions like Firebase and Zapier.

It makes sense that API providers would be looking externally to deliver aggregation, real-time, reciprocity, and even voice layers for their API platforms--this stuff is hard, so why spread yourself too thin. Intuit just bought reciprocity provider itDuzzit, and I think we will see more providers integrating Zapier into their platforms by default like Nimble did. We'll also see more API platforms bring in Firebase as a real-time layer like Nest did for their Internet of Things (IoT) thermostat API platform.

Overall it seems like a white label solution that any API provider could put to use when considering solutions for aggregation, real-time, reciprocity, voice or even data solutions including spreadsheet connectors, analysis, and visualization, would do well in the space. At the very least, any company looking to step up and provide solutions in these areas, should definitely have a strong partner program like Zapier and Firebase have brought to the table.

I will have to start considering how to migrate aggregation, real-time, reciprocity, voice out of the trends bucket and into either the provide or consume buckets, or maybe both. It would seem that both API providers and consumers need to be educated in these areas, and made aware of what solutions are available.

I’m not that worried about the overall structure of API Evangelist at the moment. One of the beautiful aspects of how I architected the site(s) is that each research area lives as its own node on the network, so I can move things around, and shift as I need to find the right formula--something that helps me in a very fast moving space, where my understanding is constantly shifting and evolving with the swift currents of the API space.

Photo Credit: Diego Naive


Route SMS Messages To Google Spreadsheets Via Twilio API With TwilioSheet

If you follow the Twilio blog or Twitter account you can always find a good API story from the API leader. It also makes me happy to see trends I’m seeing from other providers reinforced by the API heavyweight. This time it is providing spreadsheet integration with common API resources, like Twilio SMS.

Twilio has a pretty slick tool they call TwilioSheet that allows you to receive SMS messages in a Google Spreadsheet, and created a pretty nice walkthrough of the entire setup. Providing this type of functionality helps, as Twilio says, "make it easy for developers and non-developers alike to receive SMS messages in a Google Spreadsheet”—emphasis on the non-developers.

Whether we like it or not, the spreadsheet is the #1 database solution in the world, and provides a huge opportunity when it comes to bridging the world of APIs with the wider business landscape. This is something that API reciprocity providers like Zapier have been bridging for a while now, and something that API providers like Intuit are looking to bake into their API platforms.

When you see Twilio doing something like providing spreadsheet integration for their API platform, you have to stop and consider whether or not it is something that might work for your own API platform. Spreadsheet integration by default with your API driven resources is a great way to expand the reach of any API, bringing these valuable resources within reach of the actual, everyday problem owners.


Every API Provider Should Have A Logo And Branding Page

I spend a lot of time looking for good quality logos to represent the companies I track on and write stories about. I have a certain vision in my head about how I want company listings and detail pages to look across the API Evangelist network—something that takes a lot of work.

To support this vision, I spend a lot of time looking for logos. Sometimes you can find them in the header of a website, but often times they are poor quality, not configured to be standalone, or difficult to get at for any number of other reasons--making my work a lot tougher.

I published a new list of 819 companies who are doing interesting things with APIs that I call The API Stack. After publishing the project, Concur, one of the travel API providers listed, tweeted at me asking if I could replace their logo with a better quality one from their official logo page--I did so very quickly!

I sure love me a good logo and branding page for an API. It makes my life easier, by giving me a single place to go get high quality logos, without having to open up my image editor. I wish that more API platforms would have a well designed logo page like Concur.


Never Looking Out The Window, Let Alone Trusting Anyone External Of The Department of Veteran Affairs

I haven't written much about my experience last summer as a Presidential Innovation Fellow (PIF) at the Department of Veterans Affairs (VA). I have lots of thoughts about my experience at the VA, as well as participating in the PIF program, and I choose to trickle these thoughts out as I continue to make sense of them, and bring them into alignment with my overall mission as the API Evangelist.

I was given three projects when I started work at the VA: 1) inventory data assets, 2) inventory web services, and 3) move forward D2D, a forms web service that would allow VA hospitals and Veteran Service Organizations (VSOs) to submit forms through the claims process on behalf of veterans.

The most prevalent illness I witnessed across these three efforts was an unwillingness to trust outside groups (even VSOs and hospitals), and a lack of desire to share data and resources with anyone outside of the VA (ironically, except contractors), to the point where groups seemed to take defensive positions around what they did on behalf of our veterans. This culture makes for some pretty toxic environments, which I personally feel contributes to many of the problems we’ve seen bubble up into the public media space of late.

While working at the VA you constantly hear about the VA claims backlog, and how we need to optimize, but when you bring up sharing data or resources with other federal agencies, or trusted external partners like hospitals and VSOs, you get pushback with concerns about security, personally identifiable information (PII), etc. All of which are valid concerns, but there are proven ways to mitigate these risks through Identity and Access Management (IAM), which is another whole post in itself. You start feeling crazy when you get pushback for arguing that a doctor should be able to submit disability questionnaires via an iPad application that uses an existing VA API, in a way that securely authenticates the doctor.

As a result of other systemic and cultural issues, and mistakes made in the past, VA employees and contractors are extremely averse to opening up to the outside world, even if it can help. I kept hearing references to the 2006 data breach, where an employee brought a laptop home, affecting 26M individuals, as a reason to keep systems locked down. This horror story, plus a variety of other cultural issues, is keeping VA staff from accepting any new way of thinking, even if it could help reduce their workload, improve the claims process, and better serve veterans and their families.

This is a pretty fundamental flaw in how large government agencies operate, one that is in conflict with the solutions APIs can bring to the table. I don’t give a shit how well designed your API is, in this environment you will fail. Period. I do not think I will ever fully understand what I saw at the VA while a PIF in Washington DC, but I feel like I’m finally reaching a point where I can at least talk about things publicly, put my thoughts out there, and begin folding my experiences as a PIF at the VA into my overall API Evangelist message.


Providing Faculty Access To Campus APIs

University faculty and administrators are increasingly depending on technology to do their jobs. As institutions continue to require staff to use common systems like the Learning Management System (LMS) and the Student Information System (SIS), as well as contact, content, document, media, and a variety of other systems, they also need to understand the importance of ensuring that all of these systems enable the feature, setting, content, and data portability that can be introduced by using APIs.

Using APIs could be as simple as allowing a teacher to pull a custom student roster for a course in a different way than the student information system will allow, or it could be part of a larger research project, allowing a distributed group of researchers to work on a single document or even a database via APIs. APIs are often right below the surface of many of the systems faculty and administrators already depend on for their jobs, and it would not take much to make these existing APIs available in a way that allows staff to use them across multiple channels, including the web, mobile, single page applications, and spreadsheets, as well as across external data analysis and visualization tools.
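As a rough illustration of how small that first use case can be, here is a sketch of a faculty-facing script that pulls a course roster from a hypothetical campus SIS API and saves it as a CSV for use in a spreadsheet. The base URL, endpoint path, token, and field names are all assumptions, not any particular vendor's API.

    # roster_export.py -- sketch: pull a course roster from a hypothetical campus API
    import csv

    import requests

    SIS_API = "https://api.example.edu/sis/v1"   # hypothetical campus API base URL
    TOKEN = "replace-with-your-api-token"        # token issued to faculty/staff

    def export_roster(course_id: str, outfile: str) -> None:
        # Request the roster for a single course, authenticating with a bearer token.
        response = requests.get(
            f"{SIS_API}/courses/{course_id}/roster",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        students = response.json()  # assumed: a list of student objects

        # Write the fields a teacher actually needs into a spreadsheet-friendly CSV.
        with open(outfile, "w", newline="") as handle:
            writer = csv.writer(handle)
            writer.writerow(["student_id", "name", "email"])
            for student in students:
                writer.writerow([student.get("id"), student.get("name"), student.get("email")])

    if __name__ == "__main__":
        export_roster("BIO-101", "bio-101-roster.csv")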


The New StrongLoop API Server Provides A Look At Future Of API Deployment

I’m looking through the most recent API server release from StrongLoop, and I can’t help but see echoes of what I’ve been researching and covering across the API Evangelist network. API management has been front and center for years, but API deployment is something that is just now being productized, with a wealth of new service providers emerging to provide API deployment solutions that go beyond DIY frameworks and enterprise API gateways.

Let’s start by walking through their announcement of the StrongLoop API Server:

  • LoopBack 2.0 - An open source framework for quickly creating APIs with Node, including the client SDKs.
  • Mobile Backend-as-a-Service - An mBaaS to provide mobile services like push, offline-sync, geopoint and social login either on-premise or in the cloud.
  • Connectors - Connectivity for Node apps leveraging over ten supported data sources including Oracle, SQL Server, MongoDB and SOAP.
  • Controller - Automated DevOps for Node apps including profiling, clustering, process management and log management capabilities.
  • Monitoring - A hosted or on-premise graphical console for monitoring resource utilization, response times and function tracing with the ability to send metrics to existing monitoring tools.

Just as StrongLoop did in their release post, let’s dive deeper into LoopBack 2.0, the open source core of StrongLoop, which they say "acts as a glue between apps or devices and data via APIs written in Node”:

  • Studio - A graphical interface to complement the command-line tooling and assist developers in building Loopback models.
  • Yeoman and Grunt - The ability to script tasks, scaffold, and template applications and externalize their configurations for multiple environments.
  • ExpressJS 4.0 - The latest update for the well known Node.js package, bringing improvements by removing bundled middleware and refactoring it into maintainable modules, a revamped router to remove confusion around HTTP verb usage, and decoupling Connect, the HTTP framework of Node, from the Express web framework. It is also the E in the MEAN stack (MongoDB, ExpressJS, AngularJS, Node.js).
  • Project Structure - The directory structure has been expanded to make it easier to organize apps and add functionality via pre-built LoopBack components and Node modules.
  • Workspace API - Internal API making it easier to define, configure, and bootstrap your application at design time and runtime by simply defining metadata in the form of JSON.

This is one of the few sophisticated, next generation API deployment frameworks I have seen. We have had gateways for a while, and we have a new breed of database- and spreadsheet-to-API providers like APISpark. We also have a new wave of scraping-to-API solutions from Kimono Labs and Import.io, but I’d say Orchestrate.io gets us closest to the vision I have for StrongLoop, when it comes to API deployment.
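To show what this "point at a data store, get an API" pattern boils down to, here is a minimal sketch in Python—not LoopBack or StrongLoop themselves, which are Node.js frameworks—of the database-to-API pattern this new generation of deployment tooling productizes. The SQLite file and table name are assumptions.

    # db_to_api.py -- sketch of the database-to-API pattern (not LoopBack itself)
    import sqlite3

    from flask import Flask, jsonify

    app = Flask(__name__)
    DATABASE = "resources.db"  # assumed SQLite file with a "notes" table

    def query(sql, args=()):
        # Open a connection per request and return rows as dictionaries.
        conn = sqlite3.connect(DATABASE)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(sql, args).fetchall()
        conn.close()
        return [dict(row) for row in rows]

    @app.route("/notes", methods=["GET"])
    def list_notes():
        # Collection endpoint: return every row in the table as JSON.
        return jsonify(query("SELECT * FROM notes"))

    @app.route("/notes/<int:note_id>", methods=["GET"])
    def get_note(note_id):
        # Item endpoint: return a single row, or a 404 if it does not exist.
        rows = query("SELECT * FROM notes WHERE id = ?", (note_id,))
        if not rows:
            return jsonify({"error": "not found"}), 404
        return jsonify(rows[0])

    if __name__ == "__main__":
        app.run(port=5000)

What the productized platforms add on top of this core pattern is everything around it—model definitions, SDK generation, DevOps, and monitoring—which is exactly what the LoopBack 2.0 feature list above describes.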

I’ve referenced this ability in my stories on virtual API stacks:

This new approach to API deployment allows us to rapidly define, deploy, and orchestrate stacks of API resources for use in our web, single page, and mobile applications. I really feel like BaaS, as an entire industry, was just a short growth phase leading us to this point, where anyone can quickly deploy their own BaaS for any broad or niche purpose. I also see my research into the world of APIs and Single Page Apps (SPAs) reflected here, in StrongLoop’s API platform vision.

I feel that StrongLoop has an important take on API deployment, one that reflects where leading API, web, single page, and mobile app developers have been for a while now. The difference is that StrongLoop is providing it as a standardized platform, allowing developers to much more elegantly orchestrate their entire lifecycle. You have everything you need to connect to existing resources, generate new API resources, and organize your work into reusable parts, to deliver the web, single page, and mobile apps you need.

I am closely watching this new generation of API deployment providers, companies like StrongLoop, Orchestrate, Flynn, and Cosmic. I see these players as the next generation API gateway, going way beyond just providing an enterprise gateway to internal assets. This newer vision is much more directly aligned with the needs of developers, enabling them to rapidly design, deploy, and manage the API services they need to drive the web, single page, and mobile apps that are the vehicles of the API economy.


Low Hanging Fruit For API Discovery In The Federal Government

I looked through 77 of the developer areas for federal agencies, resulting in reviewing approximately 190 APIs. While the presentation of 95% of the federal government developer portals is crap, it makes me happy that about 120 of the 190 APIs (over 60%) are actually consumable web APIs that didn't make me hold my nose and run out of the API area.

Of the 190, only 13 actually made me happy for one reason or another:

Don't get me wrong, there are other nice implementations in there. I like the simplicity and consistency in APIs coming out of GSA and SBA, but overall federal APIs reflect what I see a lot in the private sector: some developer making a decent API, but their follow-through and launch severely lacking what it takes to make the API successful. People wonder why nobody uses their APIs? Hmmmm....

A little minimalist simplicity in a developer portal, a simple explanation of what an API does, interactive documentation w/ Swagger, code libraries, and terms of service (TOS) would go a long way in making sure these government resources are found and put to use.

Ok, so where the hell do I start? Let's look through these 123 APIs and see where the real low hanging fruit is for demonstrating the potential of APIs.json, when it comes to API discovery in the federal government.

Let's start again with the White House (http://www.whitehouse.gov/developers):

Only one API made it out of the USDA:

Department of Commerce (http://www.commerce.gov/developer):

  • Census Bureau API - http://www.census.gov/developers/ - Yes, a real developer area with supporting building blocks (updates, news, app gallery, forum, mailing list). Really could use interactive documentation though. There are URLs, but no active calls. Would be way easier if you could play with the data before committing. (B)
  • Severe Weather Data Inventory - http://www.ncdc.noaa.gov/swdiws/ - Fairly basic interface, wouldn’t take much to turn into a modern web API. Right now it's just a text file, with spec style documentation explaining what to do. Looks high value. (B)
  • National Climatic Data Center Climate Data Online Web Services - http://www.ncdc.noaa.gov/cdo-web/webservices/v2 - Oh yeah, now we are talking. That is an API. No interactive docs, but nice clean ones. It would be some work, but could be done. (A)
  • Environmental Research Division's Data Access Program - http://coastwatch.pfeg.noaa.gov/erddap/rest.html - Looks like a decent web API. Wouldn’t be too much to generate a machine readable definition and make into a better API area. (B)
  • Space Physics Interactive Data Resource Web Services - http://spidr.ngdc.noaa.gov/spidr/docs/SPIDR.REST.WSGuide.en.pdf - Well, it's a PDF, but it looks like a decent web API. It would be some work, but it could turn into a decent API with Swagger specs. (B)
  • Center for Operational Oceanographic Products and Services - http://tidesandcurrents.noaa.gov/api/ - Fairly straightforward API, Simple. Wouldn’t be hard to generate interactive docs for it. Spec needed. (B)

Arlington Cemetery:

Department of Education:

  • Department of Education - http://www.ed.gov/developers - Lots of high value datasets. Says API, but is a JSON file. Wouldn't be hard to generate APIs for it all and create machine readable definitions. (B)

Energy:

  • Energy Information Administration - http://www.eia.gov/developer/ - Nice web API, simple clean presentation. Needs interactive docs. (B)
  • National Renewable Energy Laboratory - http://developer.nrel.gov/ - Close to a modern Developer area with web APIs. Uses standardized access (umbrella). Some of them have Swagger specs, the rest would be easy to create. (A)
  • Office of Scientific and Technical Information - http://www.osti.gov/XMLServices - Interfaces are pretty well designed, and Swagger specs would be straightforward. But docs are all PDF currently. (B)

Department of Health and Human Services (http://www.hhs.gov/developer):

Food and Drug Administration (http://open.fda.gov):

Department of Homeland Security (http://www.dhs.gov/developer):

Two loose cannons:

 Department of Interior (http://www.doi.gov/developer):

Department of Justice (http://www.justice.gov/developer):

Labor:

  • Department of Labor - http://developer.dol.gov/ - I love their developer area. They have a great API, easy to generate API definitions. (A)
  • Bureau of Labor Statistics - http://www.bls.gov/developers/ - Web APIs in there. Complex, and lots of work, but can be done. API Definitions Needed. (B)

Department of State (http://www.state.gov/developer):

Department of Transportation (http://www.dot.gov/developer):

Department of the Treasury (http://www.treasury.gov/developer):

Veterans Affairs (http://www.va.gov/developer):

Consumer Financial Protection Bureau:

Federal Communications Commission (http://www.fcc.gov/developers):

Lone bank:

  • Federal Reserve Bank of St. Louis - http://api.stlouisfed.org/ - Good API and area, would be easy to generate API definitions. (B)

General Services Administration (http://www.gsa.gov/developers/):

National Aeronautics and Space Administration http://open.nasa.gov/developer:

Couple more loose cannons:

Recovery Accountability and Transparency Board (http://www.recovery.gov/arra/FAQ/Developer/Pages/default.aspx):

Small Business Administration (http://www.sba.gov/about-sba/sba_performance/sba_data_store/web_service_api):

Last but not least.

That is a lot of potentially valuable API resources to consume. From my perspective, I think that what has come out of GSA, SBA, and the White House Petition API represents probably the simplest, most consistent, and highest value targets for me. Next, maybe the wealth of APIs out of Interior and FDA. After that I'll cherry pick from the list, and see which are easiest.

I'm looking to create a Swagger definition for each of these APIs, and publish them as a Github repository, allowing people to play with the APIs. If I have to, I'll create a proxy for each one, because CORS is not common across the federal government. I'm hoping not to spend too much time on proxies, because once I get in there I always want to improve the interface and evolve a facade for each API, and I don't have that much time on my hands.
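For reference, a proxy like the ones described here does not need to be much: a pass-through that forwards requests to a federal API lacking CORS support and adds the header browsers need, so interactive Swagger docs can call it directly. This is a minimal sketch, with the upstream base URL used only as an example target.

    # cors_proxy.py -- minimal sketch of a pass-through CORS proxy for a federal API
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    UPSTREAM = "https://api.stlouisfed.org"  # example upstream federal API

    @app.route("/<path:path>", methods=["GET"])
    def proxy(path):
        # Forward the path and query string to the upstream API.
        upstream_response = requests.get(
            f"{UPSTREAM}/{path}",
            params=request.args,
            timeout=30,
        )
        # Relay the body and status code, adding the CORS header the browser needs.
        return Response(
            upstream_response.content,
            status=upstream_response.status_code,
            headers={
                "Content-Type": upstream_response.headers.get("Content-Type", "application/json"),
                "Access-Control-Allow-Origin": "*",
            },
        )

    if __name__ == "__main__":
        app.run(port=8080)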


My Response To How Can the Department of Education Increase Innovation, Transparency and Access to Data?

I spent considerable time going through the Department of Education RFI, answering each question in as much detail as I possibly could. You can find my full response below. In the end I felt I could provide more value by summarizing my response, eliminating much of the redundancy across different sections of the RFI, and just cutting through the bureaucracy as I (and APIs) prefer to do.

Open Data By Default
All publicly available data at the Department of Education needs to be open by default. This is not just a mandate, this is a way of life. There is no data that is available on any Department of Education websites that should not be available for data download. Open data downloads are not separate from existing website efforts at Department of Education, they are the other side of the coin, making the same content and data available in machine readable formats, rather than available via HTML—allowing valuable resources to be used in systems and applications outside of the department’s control.

Open API When There Are Resources
The answer to whether or not the Department of Education should provide APIs is the same as whether or not the agency should deploy websites—YES! Not all individuals and companies will have the resources to download, process, and put downloadable resources to use. In these situations APIs can provide much easier access to open data resources, and when open data resources are exposed as APIs it opens up access to a much wider audience, even non-developers. Lightweight, simple API access to the open data inventory should be the default, along with data downloads, when resources are available. This approach to APIs by default will act as the training ground for not just 3rd party developers, but also internally, allowing Department of Education staff to learn how to manage APIs in a safe, read-only environment.

Using A Modern API Design, Deployment, and Management Approach
As the usage of the Internet matured in 2000, many leading technology providers like SalesForce and Amazon began using web APIs to make digital assets available to 3rd party partners, and 14 years later there are some very proven approaches to designing, deploying, and managing APIs. API management is not a new and bleeding edge approach to making assets available in the private sector; there are numerous API tools and services available, and this has begun to extend to the government sector with tools like API Umbrella from NREL, being employed by api.data.gov and other agencies, as well as other tools and services being delivered by 18F from GSA. There are many proven blueprints for the Department of Education to follow when embarking on a complete API strategy across the agency, allowing innovation to occur around specific open data, and other program initiatives, in a safe, proven way.

Use API Service Composition For Maximum Access & Control
One benefit of 14 years of evolution around API design, deployment, and management is the establishment of sophisticated service composition of API resources. Service composition refers to the granular, modular design and deployment of APIs, while being able to manage who has access to these resources. Modern API access is not just direct, public access to a database. API service composition allows for designing exactly the access to resources that is necessary, in alignment with business objectives, while protecting the privacy and security of everyone involved. Additionally, service composition allows for real-time awareness of how all data, content, and other resources at the Department of Education are accessed and put to use, allowing new APIs to be designed to support specific needs, and existing APIs to evolve based upon actual demand, not just speculation.
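In code, service composition can be as simple as a set of access plans describing which resources and how many calls each tier of consumer gets. The tier names, resources, and limits below are purely illustrative, not a real Department of Education policy—the point is that composition is just declared policy plus a check at request time.

    # service_composition.py -- sketch of hypothetical access plans and a request check
    SERVICE_PLANS = {
        "public":   {"resources": {"schools", "programs"},                    "daily_limit": 1_000},
        "partner":  {"resources": {"schools", "programs", "aid"},             "daily_limit": 50_000},
        "internal": {"resources": {"schools", "programs", "aid", "students"}, "daily_limit": None},
    }

    def is_allowed(plan: str, resource: str, calls_today: int) -> bool:
        """Check a request against the consumer's plan: right resource, under quota."""
        policy = SERVICE_PLANS.get(plan)
        if policy is None or resource not in policy["resources"]:
            return False
        limit = policy["daily_limit"]
        return limit is None or calls_today < limit

    # Example: a public consumer can read school data, but not student records.
    assert is_allowed("public", "schools", calls_today=10)
    assert not is_allowed("public", "students", calls_today=10)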

Deeper Understanding Of How Resources Are Used
A modern API service composition layer opens up the possibility for a new analytics layer that is not just about measuring and reporting access to APIs, it is about understanding precisely how resources are accessed in real-time, allowing API design, deployment, and management processes to be adjusted in a more rapid and iterative way that contributes to the roadmap, while providing the maximum enforcement of the security and privacy of everyone involved. When the Department of Education internalizes a healthy, agency-wide API approach, a new real-time understanding will replace this very RFI centered process that we are participating in, allowing for a new agility, with more control and flexibility than current approaches. An RFI cycle takes months, and will contain a great deal of speculation about what might be, where API access, coupled with healthy analytics and feedback loops, answers all the questions being addressed in this RFI in real-time, reducing resource costs and wasted cycles.

APIs Open Up Synchronous and Asynchronous Communication Channels
Open data downloads represent a broadcast approach to making Department of Education content, data, and other resources available—a one way street. APIs provide two-way communication, bringing external partners and vendors closer to the Department of Education, while opening up feedback loops with the Department of Education, reducing the distance between the agency and its private sector partners—potentially bringing valuable services closer to students, parents, and the companies or institutions that serve them. Feedback loops at the Department of Education are currently much wider, occurring annually or monthly, and at the speed of email or phone calls, with the closest being in person at events, something that can be a very expensive endeavor. Web APIs provide a real-time, synchronous and asynchronous communication layer that will improve the quality of service between the Department of Education and the public, for a much lower cost than traditional approaches.

Building External Ecosystem of Partners
With the availability of high value API resources, coupled with a modern approach to API design, deployment, and management, an ecosystem of trusted partners can be established, allowing the Department of Education to share the workload with an external partner ecosystem. API service composition allows the agency to open up access to resources to only the partners who have proven they will respect the privacy and security of resources, and be dedicated to augmenting and helping extend the mission of the Department of Education. As referenced in the RFI, think about the ecosystem established by the IRS modernized e-file system, and how the H&R Blocks and Jackson Hewitts of the world help the IRS share the burden of the country's tax system. Where is the trusted ecosystem for the Department of Education? The IRS ecosystem has been in development for over 25 years—the Department of Education needs to get to work on theirs now.

Security Fits In With Existing Website Security Practices
One of the greatest benefits of web APIs is that they utilize the existing web technologies that are employed to deploy and manage websites. You don’t need additional security approaches to manage APIs beyond those of existing websites. Modern web APIs are built on HTTP, just like websites, and security can be addressed right alongside current website security practices—instead of delivering HTML, APIs are delivering JSON and XML. APIs even go further, and by using modern API service composition practices, the Department of Education gains an added layer of security and control, which introduces granular levels of access to all resources, something that does not exist for websites. With a sensible analytics layer, API security isn’t just about locking things down, it is about understanding who is accessing resources and how they are using them, striking a balance between the security and accessibility of resources, which is the hallmark of APIs.

oAuth Gives Identity and Access Control To The Student
Beyond basic web security, and the heightened level of control modern API management delivers, there is a 3rd layer to the security and privacy of APIs that does not exist anywhere else—oAuth. Open Authorization, or oAuth, provides an identity and access layer on top of APIs that gives end-users, the owners of personal data, control over who accesses their data. Technology leaders in the private sector are all using oAuth to give platform users control over how their data is used in applications and systems. oAuth is the heartbeat of API security, giving API platforms a way to manage security, and how 3rd party developers access and put resources to use, in a way that gives control to end-users. In the case of Department of Education APIs, this means putting the parent and student at the center of who accesses and uses their personal data, something that is essential to the future of the Department of Education.
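For readers less familiar with how oAuth puts the student in the middle of the exchange, here is a sketch of the standard oAuth 2.0 authorization code flow from a 3rd party application's point of view. The Department of Education endpoints, client credentials, and scope names are hypothetical; the flow itself is the standard one, in which the student approves access and can revoke it later without the application's involvement.

    # student_consent_flow.py -- sketch of the standard oAuth 2.0 authorization code flow
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = "my-registered-app"
    CLIENT_SECRET = "my-client-secret"
    AUTHORIZE_URL = "https://api.ed.example.gov/oauth/authorize"   # hypothetical
    TOKEN_URL = "https://api.ed.example.gov/oauth/token"           # hypothetical

    # 1) Send the student to the platform to approve (or deny) access to their data.
    session = OAuth2Session(
        CLIENT_ID,
        redirect_uri="https://myapp.example.com/callback",
        scope=["aid-status:read"],
    )
    authorization_url, state = session.authorization_url(AUTHORIZE_URL)
    print("Ask the student to visit:", authorization_url)

    # 2) After the student approves, the platform redirects back with a code,
    #    which the application exchanges for an access token tied to that student.
    redirect_response = input("Paste the full callback URL: ")
    token = session.fetch_token(
        TOKEN_URL,
        client_secret=CLIENT_SECRET,
        authorization_response=redirect_response,
    )

    # 3) Every API call is now made on the student's behalf, and the student can
    #    revoke this token at any time via the platform.
    response = session.get("https://api.ed.example.gov/v1/aid-status")
    print(response.json())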

How Will Policy Be Changed?
I'm not a policy wonk, nor will I ever be one. One thing I do know is you will never understand the policy implications in one RFI, nor will you change policy to allow for API innovation in one broad stroke--you will fail. Policy will have to be changed incrementally, a process that fits nicely with the iterative, evolutionary life cycle of API management. The cultural change at the Department of Education, as well as evolutionary policy change at the federal level, will be the biggest benefits of APIs at the Department of Education.

An Active API Platform At Department of Education Would Deliver What This RFI Is Looking For
I know it is hard for the Department of Education to see APIs as something more than a technical implementation, and you want to know, understand, and plan everything ahead of time—this is baked into the risk averse DNA of government. Even with this understanding, as I go through the RFI, I can’t help but be frustrated by the redundancy, bureaucracy, over planning, and waste that is present in this process. An active API platform would answer every one of the questions you pose, with much more precision than any RFI can ever deliver.

If the Department of Education had already begun evolving an API platform for all open data sets currently available on data.gov, the agency would have the experience in API design, deployment and management to address 60% of the concerns posed by this RFI. Additionally the agency would be receiving feedback from existing integrators about what they need, who they are, and what they are building to better serve students and institutions. Because this does not exist there will be much speculation about who will use Department of Education APIs, and how they will use them and better serve students. While much of this feedback will be well meaning, it will not be rooted in actual use cases, applications and existing implementations. An active API ecosystem answers these questions, while keeping answers rooted in actual integrations, centered around specific resources, and actual next steps for real world applications.

The learning that occurs from managing read-only API access, to low-level data, content and resources would provide the education and iteration necessary for the key staff at Department of Education to reach the next level, which would be read / write APIs, complete with oAuth level security, which would be the holy grail in serving students and achieving the mission of the Department of Education. I know I’m biased, because of my focus on APIs, but read / write access to all Department of Education resources over the web and via mobile devices, that gives full control to students, is the future of the agency. There is no "should we do APIs", there is only the how, and I’m afraid we are wasting time, and we need to just do it, and learn to ask these questions along the way.

There is proven technology and processes available to make all Department of Education data, content and resources available, allowing both read and write access in a secure way, that is centered around the student. The private sector is 14 years ahead of the government in delivering private sector resources in this way, and other government agencies are ahead of the Department of Education in doing this as well, but there is an opportunity for the agency to still lead and take action, by committing the resources necessary to not just deploy a single API, but internalize APIs in a way that will change the way learning occurs in the coming decades across all US institutions.


A. Information Gaps and Needs in Accessing Current Data and Aid Programs

1. How could data sets that are already publicly available be made more accessible using APIs? Are there specific data sets that are already available that would be most likely to inform consumer choice about college affordability and performance?

Not everyone has the resources to download, process, and put open datasets to use. APIs can make all of the publicly available datasets more available to the public, allowing for easy URL access, deployment of widgets and visualizations, as well as integration with existing tools like Microsoft Excel. All datasets should have the option of being published in this way, but ultimately the Dept. of Ed API ecosystem should speak to which datasets would be most high value, and warrant API access.

2. How could APIs help people with successfully and accurately completing forms associated with any of the following processes: FAFSA; Master Promissory Note; Loan Consolidation; entrance and exit counseling; Income-Driven Repayment (IDR) programs, such as Pay As You Earn; and the Public Student Loan Forgiveness program?

APIs will help decouple each data point on a form. Introductory information, each question, and other supporting resources can be broken up and delivered via any website or mobile application. This evolves a linear, 2-dimensional form into an interactive application that people can engage with, providing the assistance needed to properly achieve the goals surrounding a form.

Each form initiative will have its own needs, and a consistent API platform and strategy from the Department of Education will help identify each form's unique requirements, and the custom delivery of just the resources that are needed for a form's target audience.

3. What gaps are there with loan counseling and financial literacy and awareness that could be addressed through the use of APIs to provide access to government resources and content?

First, APIs can provide access to the content that educates students about the path they are about to embark on, before they do, via the web and mobile apps they already frequent, without being required to visit the source site to learn. Putting the information students need into their hands, via their mobile devices, will increase the reach of this content and increase the chances that students will consume it.

Second, APIs plus oAuth will give students control over their own educational finances, forcing them to better consider how they will manage all the relationships they enter into—the details of loans, grants, and the schools they attend. With more control over data and content will come a forced responsibility in understanding and managing their finances.

Third, this process will open up students' eyes to the wider world of online data and information, and to the fact that APIs are driving all aspects of their financial life, from their banking and credit cards to managing their online credit score.

APIs are at the heart of the digital economy, and the gift of API literacy, given to students when they first leave home, would carry with them throughout their lives, allowing them to better manage all aspects of their online and financial lives—and the Department of Education would have given them that start.

4. What services that are currently provided by title IV student loan servicers could be enhanced through APIs (e.g., deferment, forbearance, forgiveness, cancellation, discharge, payments)?

A consistent API platform and strategy from the Department of Education would provide for the evolution of a suite of verified partners, such as title IV student loan servicers. A well planned partner layer within an ecosystem would allow student loan servicers to access data from students in real-time, with students having a say in who has access to the data and how. These dynamics, introduced by and unique to API platforms that employ oAuth, provide new opportunities for partnerships to be established, evolve, and even be terminated when not going well.

API platforms using oAuth provide a unique 3-legged relationship between the data platform, 3rd party service providers, and students (users), which can be adopted to bring in existing industry partners, but more importantly provides a rich environment for new types of partners to evolve that can improve the overall process and workflow a student experiences.

5. What current forms or programs that already reach prospective students or borrowers in distress could be expanded to include broader affordability or financial literacy information?

All government forms and programs should be evaluated for the pros / cons of an API program. My argument within this RFI response will be focused on a consistent API platform and strategy from the Department of Education. APIs should be part of every existing program change, and every new initiative in the future.

B. Potential Needs to be Filled by APIs

1. If APIs were available, what types of individuals, organizations, and companies would build tools to help increase access to programs to make college more affordable?

A consistent API platform and strategy from the Department of Education will have two essential components: a partner framework and service composition. A partner framework defines which external, 3rd party groups can work with Department of Education API resources. Service composition defines how these 3rd party groups can access and ultimately use Department of Education API resources.

All groups that the Department of Education currently interacts with should be evaluated for where they exist in the API partner framework, defining levels of access from the general public and students up to certified and trusted developer and business partnerships.

The partner framework and service composition for the Department of Education API platform should be applied to all existing individuals, organizations, and companies, while also allowing new actors to enter the game, potentially redefining the partner framework and adding new formulas for API service composition, opening up the possibilities for innovation around Department of Education API resources.

2. What applications and features might developers, schools, organizations, and companies take interest in building using APIs in higher education data and services?

As with the questions of which Department of Education forms and programs APIs might apply to, and which individuals, organizations, and companies will use APIs, there is no way to truly know what applications developers, schools, organizations, and companies might build until the platform is in place. These are the questions an API centric company or institution asks of its API platform in real-time. You can’t define who will use an API and how they will use it; it takes iteration and exploration before successful applications emerge.

3. What specific ways could APIs be used in financial aid processes (e.g., translation of financial aid forms into other languages, integration of data collection into school or State forms)?

When a resource is available via an API, it is broken down into the smallest parts and pieces possible, allowing them to be re-used and re-purposed into every possible configuration. When you make form questions independently available via an API, it allows you to reorder them, translate them, and ask them in new ways.

This approach works well with forms, allowing each entry of a form to be accessible and transferable, and opened up for access with the proper permissions and access level, owned by the person who owns the form data. This opens up not just the financial aid process, but all form processes, to interoperate with other systems, forms, agencies, and companies.

With the newfound modularity and interoperability introduced by APIs, the financial aid process could be broken down, allowing parents to play their role, schools theirs, and allowing multiple agencies to be engaged, such as the IRS or the Department of Veterans Affairs (VA). All of this allows any involved entity or system to do its part for the financial aid process, minimizing the friction throughout the entire form process, even year over year.

4. How can third-party organizations use APIs to better target services and information to low-income students, first-generation students, non-English speakers, and students with disabilities?

Again, this is a question that should be asked in real-time of a Department of Education platform. Examples of how 3rd party organizations can better target services and information to students are the reason for an API platform. There is no way to know this ahead of time; I will leave it to domain experts to attempt an answer.

5. Would APIs for higher education data, processes, programs or services be useful in enhancing wraparound support service models? What other types of services could be integrated with higher education APIs?

A sensibly designed, deployed, managed, and evangelized API platform would establish a rich environment for existing educational services to be augmented, while also allowing entirely new types of services to be defined. Again I will leave it to domain experts to speak to specific service implementations based upon their goals and understanding of the space.

C. Existing Federal and Non-Federal Tools Utilizing APIs

1. What private-sector or non-Federal entities currently offer assistance with higher education data and student aid programs and processes by using APIs? How could these be enhanced by the Department’s enabling of additional APIs?

There are almost 10K public APIs available in the private sector. This should be viewed as a palette for developers, something they draw from as they are painting (developing) their apps. It is difficult for developers to know what they will be painting with, without knowing what resources are available. The open API innovation process is rarely able to articulate what is needed and then make that request for resources—API innovation occurs when valuable, granular resources are available from multiple sources, and developers assemble them and innovate in new ways.

2. What private-sector or non-Federal entities currently work with government programs and services to help people fill out government forms? Has that outreach served the public and advanced public interests?

Another question that should be answered by the Department of Education's own platform, providing us with the answers. How would you know this without a properly defined partner framework? Stand up an API platform, and you will have the answer.

3. What instances or examples are there of companies charging fees to assist consumers in completing otherwise freely available government forms from other agencies? What are the advantages and risks to consider when deciding to allow third parties to charge fees to provide assistance with otherwise freely available forms and processes? How can any risks be mitigated?

I can't speak to what is already going on in the space regarding companies charging fees to consumers; I am not an expert on the education space at this level. This is just such a new paradigm made possible via APIs and open data, there just aren’t that many examples in the space built around open government data.

First, the partner tiers of API platforms help verify and validate the individuals and organizations who are building applications and charging for services in the space. A properly designed, managed, and policed partner tier can assist in mitigating risk in the evolution of such business ecosystems.

Second, API driven security layers using oAuth give control to end-users, allowing students to decide which applications, and ultimately service providers, have access to their data, revoking access when services are done or a provider is undesirable. With proper reporting and rating systems, policing of the API platform can be something that is done within the community, with the last mile of policing being done by the Department of Education.

Proper API management practices provide the identity, access, and control layers necessary to keep resources and end-users safe. Ultimately, who has access to data, can charge fees, and play a role in the ecosystem is up to the Department of Education and end-users when applications are built on top of APIs.

4. Beyond the IRS e-filing example, what other similar examples exist where Federal, State, or local government entities have used APIs to share government data or facilitate participation in government services or processes - particularly at a scale as large as that of the Federal Student Aid programs?

This is a new, fast growing sector, and there are not a lot of existing examples, but there are a few:

Open311
An API driven system that allows citizens to report and interact with municipalities around issues within communities. While Open311 is deployed in specific cities such as Chicago and Baltimore, it is an open source platform and API that can be deployed to serve any size market.

Census Bureau
The US Census provides open data and APIs, allowing for innovation around government census survey data, used across the private sector in journalism, healthcare, and many other ways. The availability of government census data is continually spawning new applications, visualizations and other expressions, that wouldn’t be realized or known, if the platform wasn’t available.

We The People
The We The People API allows for 3rd-party integration with the White House petition process. Currently it only allows read only access to the information and the petition process, but it is possibly one way that write APIs will emerge in the federal government.

There are numerous examples of open APIs and data being deployed in government, even from the Department of Education. All of them are works in progress, and will realize their full potential over time, maturation and much iteration and engagement with the public.

D. Technical Specifications

1. What elements would a read-write API need to include for successful use at the Department?

There are numerous building blocks that can be employed in managing read-write APIs, but there are a couple that will be essential to successful read-write APIs in government:

Partner Framework
Defined access tiers for consumers of API data, with appropriate public, partner and private (internal) levels of access. All write methods are only accessible by partner and internal levels of access, requiring verification and certification of companies and individuals who will be building on top of API resources.

Service Management
The ability to compose many different types of API resource access, and create service bundles that are made accessible to different levels of partners. Service management allows for identity and access management, but also billing, reporting, and other granular level control over how services are composed, accessed, and managed.

Open Authorization (oAuth 2.0)
All data made available via Department of Education API platforms that involves personally identifiable information will require the implementation of an open authorization, or oAuth, security layer. oAuth 2.0 provides an identity layer for the platform, requiring developers to use tokens that throttle access to resources for applications, a process that is initiated, managed, and revoked by end-users—providing the highest level of control over who has access to data, and what they can do with it, to the people whose personal data is involved.

Federated API Deployments
Not all APIs should be deployed and managed within the Department of Education firewall. API platforms can be made open source so that 3rd party partners can deploy them within their own environments. Then, via a sensible partner framework, the Department of Education can decide which partners they should allow not just to write to APIs, but also to have data pulled from their trusted systems and open API deployments.

APIs provide the necessary access to federal government resources, and a sensible partner framework and service management layer, in conjunction with oAuth, will provide the necessary controls for a read / write API in government. If agencies are looking to further push risk outside the firewall, federated API deployments with trusted partners will have to be employed.
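To make the partner framework element concrete, here is a sketch of how write methods can be gated: reads are open to any registered key, while POST/PUT/DELETE are only honored for partner and internal tiers. The keys and tiers are hypothetical, and in a real platform this lookup would live in the API management layer rather than in code.

    # write_access_gate.py -- sketch of a partner framework gating write methods
    WRITE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}
    WRITE_TIERS = {"partner", "internal"}

    # Hypothetical key-to-tier registry (normally stored by the management layer).
    API_KEYS = {
        "pub-123": "public",
        "prt-456": "partner",
        "int-789": "internal",
    }

    def authorize(api_key: str, method: str) -> bool:
        """Allow reads for any known key; allow writes only for trusted tiers."""
        tier = API_KEYS.get(api_key)
        if tier is None:
            return False
        if method.upper() in WRITE_METHODS:
            return tier in WRITE_TIERS
        return True

    assert authorize("pub-123", "GET")          # public keys can read
    assert not authorize("pub-123", "POST")     # ...but cannot write
    assert authorize("prt-456", "POST")         # verified partners can write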

2. What data, methods, and other features must an API contain in order to develop apps accessing Department data or enhancing Department processes, programs, or services?

There are about 75 common building blocks for API deployments (http://management.apievangelist.com/building-blocks.html), aggregated after looking at almost 10K public API deployments. Each government API will have different needs when it comes to other supporting building blocks.

3. How would read-only and/or read-write APIs interact with or modify the performance of the Department’s existing systems (e.g., FAFSA on the Web)? Could these APIs negatively or positively affect the current operating capability of such systems? Would these APIs allow for the flexibility to evolve seamlessly with the Department’s technological developments?

There are always risks with API access to resources, but with a partner framework, service management, oAuth, and other common web security practices, these risks can be drastically reduced and mitigated in real-time.

Isolated API Deployments
New APIs should rarely be deployed and directly connected to existing systems. APIs can be deployed as an isolated interface, with an isolated data store. Existing systems can use the same API interface to read / write data and keep in sync with existing internal systems. API developers will never have access to existing systems and data stores, just isolated, defined API interfaces as part of a secure partner tier, only accessing the services they have permission to, and the end-user data they have been given access to by end-users themselves.

Federated Deployments
As described above, if government agencies are looking to further reduce risk, API deployments can be designed and deployed as open source software, allowing partners within the ecosystem to download and deploy them. A platform partner framework can provide a verification and certification process for federated API deployments, allowing the Department of Education to decide who they will pull data from, reducing the risk to internal systems and providing a layer of trust for integration.

Beyond these approaches to deploying APIs, one of the biggest benefits of web API deployments is that they use the same security as other government websites, just with an additional security layer determining who has access, and to what.

It should be the rare instance when an existing system will have an API deployed with direct integration. API automation will provide the ability to sync API deployments with existing systems and data stores.

4. What vulnerabilities might read-write APIs introduce for the security of the underlying databases the Department currently uses?

As stated above, there should be no compromise in how data is imported into existing databases at the Department of Education. It is up to the agency to decide which APIs they pull data from, and how it is updated as part of existing systems.

5. What are the potential adverse effects on successful operation of the Department’s underlying databases that read-write APIs might cause? How could APIs be developed to avoid these adverse effects?

As stated above, isolated and external, federated API deployments will decouple the risk from existing systems. This is the benefit of APIs: they can be deployed as isolated resources, and then integration and interoperability, internally and externally, is up to the consumer to decide what is imported and what isn’t.

6. How should APIs address application-to-API security?

A modern API partner framework, service management, and oAuth provide the necessary layer to identify who has access, and what resources can be used, not just by a company and user, but by each application they have developed.

Routing all API access through the partner framework plus associated service level, will secure access to Department of Education resources by applications, with user and app level logging of what was accessed and used within an application.

oAuth provides a balance to this application-to-API security layer, allowing the Department of Education to manage the security of API access, and developers to request access for their applications, but ultimately control is in the hands of end-users to define which applications have access to their data.

7. How should the APIs address API-to-backend security issues? Examples include but are not limited to authentication, authorization, policy enforcement, traffic management, logging and auditing, TLS (Transport Layer Security), DDoS (distributed denial-of-service) prevention, rate limiting, quotas, payload protection, Virtual Private Networks, firewalls, and analytics.

Web APIs use the exact same infrastructure as websites, allowing for the re-use of existing security practices employed for websites. However, APIs provide an added layer of security, logging, auditing, and analytics through the lens of the partner framework and service composition, limited only by the service management tooling available.

8. How do private or non-governmental organizations optimize the presentation layer for completion and accuracy of forms?

Business rules. As demonstrated as part of a FAFSA API prototype, business rules for each form field, along with rejection codes, can also be made available as API resources, allowing developers to build a form validation layer into all digital forms.

After submission, beyond the first line of defense provided by API developers building next generation forms, platform providers can provide further validation, review, and ultimately a status workflow that allows forms to be rejected or accepted based upon business logic.
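Here is a sketch of that FAFSA-style pattern from the developer's side: business rules for each form field are fetched from a rules API, and the application uses them as a first line of validation before submission. The endpoint, field names, and rule format are all assumptions for illustration.

    # form_rules.py -- sketch: fetch per-field business rules from an API, validate locally
    import re

    import requests

    RULES_URL = "https://api.ed.example.gov/v1/forms/fafsa/rules"  # hypothetical

    def fetch_rules() -> dict:
        # Assumed response shape: {"field_name": {"required": bool, "pattern": str}}
        response = requests.get(RULES_URL, timeout=30)
        response.raise_for_status()
        return response.json()

    def validate(form_data: dict, rules: dict) -> list:
        """Return a list of human-readable errors for any field that breaks a rule."""
        errors = []
        for field, rule in rules.items():
            value = form_data.get(field, "")
            if rule.get("required") and not value:
                errors.append(f"{field} is required")
            elif value and rule.get("pattern") and not re.fullmatch(rule["pattern"], value):
                errors.append(f"{field} is not in the expected format")
        return errors

    if __name__ == "__main__":
        rules = fetch_rules()
        submission = {"ssn": "123-45-6789", "first_name": ""}
        for problem in validate(submission, rules):
            print("Rejected:", problem)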

9. What security parameters are essential in ensuring there is no misuse, data mining, fraud, or misrepresentation propagated through use of read- only or read-write APIs?

A modern API service management layer allows the platform provider to see all API resources that are being accessed, and by whom, and easily establish patterns for healthy usage, as well as patterns for misuse. When misuse is identified, service management allows providers to revoke access, and take action against companies and individuals.

Beyond the platform provider, APIs allow for management by end-users through common oAuth flows and management tools. Sometimes end-users can identify that an app is misusing their data even before a platform provider might. oAuth gives them the control to revoke access to their data, via the API platform.

oAuth, combined with API service management tooling, has allowed for a unique security environment in which the platform can easily keep operations healthy, while end-users and developers can help police the ecosystem as well. If platform providers give users the proper rating and reporting tools, they can help keep API and data consumers in check.
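As a small sketch of the kind of pattern detection a service management layer makes possible, the snippet below counts recent calls per API key and flags any consumer that blows past a healthy threshold, so access can be reviewed or revoked. The threshold and log format are illustrative, not drawn from any particular management tool.

    # misuse_watch.py -- sketch: flag API keys with unhealthy usage in the last hour
    from collections import Counter
    from datetime import datetime, timedelta, timezone

    HOURLY_THRESHOLD = 5_000  # assumed ceiling for "healthy" usage

    def flag_heavy_consumers(access_log, now=None):
        """access_log: iterable of (api_key, timestamp) tuples from recent traffic."""
        now = now or datetime.now(timezone.utc)
        window_start = now - timedelta(hours=1)

        # Count only the calls made inside the last hour.
        recent = Counter(key for key, ts in access_log if ts >= window_start)

        # Any key over the threshold gets surfaced for review or revocation.
        return [key for key, calls in recent.items() if calls > HOURLY_THRESHOLD]

    # Example: one key hammering the platform shows up, normal consumers do not.
    now = datetime.now(timezone.utc)
    log = [("app-abc", now)] * 6_000 + [("app-xyz", now)] * 200
    print(flag_heavy_consumers(log, now))  # ['app-abc']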

10. With advantages already built into the Department’s own products and services (e.g., IRS data retrieval using FAFSA on the Web), how would new, third-party API-driven products present advantages over existing Department resources?

While existing products and services developed within the department do provide great value, the Department of Education cannot do everything on their own. Because of the access the Department has, some features will be better by default, but this won’t be the case in all situations.

The Department of Education and our government do not have unlimited resources, and with access to ALL resources available via the department, the private sector can innovate, helping share the load of delivering vital services. It's not about whether public sector products and services are better than private sector ones or vice versa, it is about the public sector and private sector partnering wherever and whenever it makes sense.

11. What would an app, service or tool built with read-write API access to student aid forms look like?

Applications will look like TurboTax and TaxACT, developed within the IRS ecosystem, and like the tools developed by the Sunlight Foundation on top of government open data and APIs.

We will never understand what applications are possible until the necessary government resources are available. All digital assets should be open by default, with a consistent API platform and strategy from the Department of Education, and the platform will answer this question.

E. Privacy Issues

1. How could the Department use APIs that involve the use of student records while ensuring compliance with potentially applicable statutory and regulatory requirements, such as the Family Educational Rights and Privacy Act (20 U.S.C. § 1232g; 34 CFR Part 99) and the Privacy Act (5 U.S.C. § 552a and 34 CFR Part 5b)?

As described above, the partner framework, service management, and oAuth layers provide the control and logging necessary to execute and audit as part of any applicable statutory and regulatory requirement.

I can’t articulate enough how this layer provides a tremendous amount of control over how these resources are accessed, giving control to the involved parties who matter the most—end-users. All API traffic is throttled, measured, and reviewed as part of service management, enforcing privacy through a partnership between the Department of Education, API consumers, and end-users.

2. How could APIs ensure that the appropriate individual has provided proper consent to permit the release of privacy-protected data to a third party? How can student data be properly safeguarded to prevent its release and use by third parties without the written consent often required?

As articulated above, the partner framework, service management, and oAuth address this. This is a benefit of API deployment: breaking down existing digital access, providing access and granular control, combined with oAuth and logging of all access—APIs take control to a new level.

oAuth has come to represent this new balance in the security and control of digital resources, allowing the platform, developers, and end-users to execute within their defined roles on the platform. This balance, introduced by APIs and oAuth, allows data to be safeguarded, while also opening it up for the widest possible use in next generation applications and other implementations.

3. How might read-only or read-write APIs collect, document, and track individuals’ consent to have their information shared with specific third parties?

oAuth. Period.

4. How can personally identifiable information (PII) and other financial information (of students and parents) be safeguarded through the use of APIs?

Access to personally identifiable information (PII) via Department of Education APIs will be controlled by students and their parents. The most important thing you can do to protect PII is to give the owner of that data an education in how to allow developer access to it in responsible ways that will benefit them.

APIs open up access, while oAuth will give students and parents the control they need to integrate with apps and existing systems to achieve their goals, while retaining the greatest amount of control over safeguarding their own data.

5. What specific terms of service should be enabled using API keys, which would limit use of APIs to approved users, to ensure that information is not transmitted to or accessed by unauthorized parties?

A well designed partner layer defining multiple levels of access, combined with sensible service packages, will establish the terms of service levels that will be bundled with API keys and oAuth level identity and access to personally identifiable information.

Common approaches to deploying partner layers with appropriate service tiers, using oAuth, have been well established over the last 10 years in the private sector. Controlling access to API resources at a granular level, providing the greatest amount of access that makes sense, while knowing who is accessing data and how they are using it, is what APIs are designed for.

6. What are the relative privacy-related advantages and disadvantages of using read-only versus read-write APIs for student aid data?

You will face many of the same privacy concerns whether an API is read or write. If it is personally identifiable information, read or write access by the wrong parties violates a student's privacy. Ensuring that data is updated only via trusted application providers is essential.

A properly defined partner layer will separate who has read and who has write access. Proper logging and versioning of data is essential to ensure data integrity, allowing end-users to manage their data via an application or system with confidence.

F. Compliance Issues

1. What are the relative compliance-related advantages and disadvantages of using read-only versus read-write APIs for student aid data?

APIs provide a single point of access to student aid data. With the implementation of a proper partner framework, service management, and oAuth, every single action through this doorway is controlled and logged. When it comes to auditing ALL operations, whether from the public, partners, or internal consumers, APIs excel at satisfying compliance concerns.

2. How can the Department prevent unauthorized use and the development of unauthorized products from occurring through the potential development of APIs? How might the Department enforce terms of service for API key holders, and prevent abuse and fraud by non-API key holders, if APIs were to be developed and made available?

As described above, the partner framework, service management, and oAuth will provide the security layer needed to manage 99% of potential abuse, but overall enforcement via the API platform is a partnership between the Department of Education, API consumers, and end-users. The last mile of enforcement will be executed by the Department of Education, but it will be up to the entire ecosystem and platform to police and enforce in real time.

3. What kind of burden on the Department is associated with enforcing terms and conditions related to APIs?

The Department of Education will handle the first line of defense by defining the partner tiers and service composition that wrap all access to APIs. The Department will also be the last mile of decision making and enforcement when violations occur. The platform should provide the data the Department needs to make decisions, as well as the enforcement mechanisms necessary, in the form of API key and access revocation, and banning apps, individuals, and businesses from the ecosystem.

4. How can the Department best ensure that API key holders follow all statutory and regulatory provisions of accessing federal student aid funds and data through use of third-party products?

The first line of defense to ensure that API key holders follow all statutory and regulatory provisions will be the verification and validation of partners upon registration, when applications go into production, and before availability in application galleries and other directories where students discover apps.

The second line of defense will be reporting requirements and the usage patterns of API consumers and their apps. If applications regularly meet self-reporting requirements, and real-time usage patterns establish healthy behavior, they can retain their certification. If partners fail to comply, they will be restricted from the API ecosystem.

The last line of defense is the end-users, the students and parents. All end-users need to be educated about the control they have, and given reporting and ranking tools that allow them to file complaints and rank the applications that provide quality services.

As stated several times, enforcement will be a community effort, something the Department of Education has ultimate control over, but one that requires giving the community agency as well.

5. How could prior consent from the student whom the data is about be provided for release of privacy- protected data to third party entities?

An API with oAuth layer is this vehicle. Providing the access, logging all transactions, and holding all partners to a quality of service. All the mechanism are there, in a modern API implementation, the access just needs to be defined.

6. How should a legal relationship between the Department and an API developer or any other interested party be structured?

I’m not a lawyer. I’m not a policy person. Just can’t contribute to this one.

7. How would a legal relationship between the Department and an API developer or any other interested party affect the Department’s current agreements with third-party vendors that operate and maintain the Department’s existing systems?

All of this will be defined in each partner tier, combined with appropriate service levels. With isolated API deployments, this should not affect current implementations.

However, a benefit of a consistent API strategy is that existing vendors can access resources via APIs, increasing the agility and flexibility of existing contracts. APIs are a single point of access, not just for the public, but for 3rd party partners as well as internal consumers. Everyone involved can participate in and receive the benefits of API consumption.

8. What disclosures should be made available to students about what services are freely available in government domains versus those that could be offered at a cost by a third party?

A partner tier for the API platform will define the different levels of partners. Trusted, verified, and certified partners will get different recommendation levels and access than lesser known services and applications from 3rd parties with lower levels of trust.

9. If the Department were to use a third-party application to engage with the public on its behalf, how could the Department ensure that the Department follows the protocols of OMB Memorandum 10-23?

Again, the partner tier determines the level of access for each partner, and the protocols of any OMB memorandum can be built in, requiring that all data, APIs, and code are open sourced, and that appropriate API access tiers show how data and resources are accessed and put to use.

API service management provides the reporting necessary to support government audits and regulations. Without this level of control on top of an API, that kind of accountability just isn't possible at the scale that APIs plus web and mobile applications operate.

G. Policy Issues

1. What benefits to consumers or the Department would be realized by opening what is currently a free and single-point service (e.g., the FAFSA) to other entities, including those who may charge fees for freely-available services and processes? What are the potential unintended consequences?

Providing API access to government resources is an efficient and sensible use of taxpayer money, and reflects the mission of all agencies, not just the Department of Education. APIs introduce the agility and flexibility needed to deliver the next generation of government applications and services.

The economy in a digital age will require a real-time partnership between the public sector and the private sector, and APIs are the vehicle for this. Much like they have done for private sector companies like Amazon and Google, APIs will allow the government to create new services and products that serve constituents with the help of the private sector, while also stimulating job growth and other aspects of the economy.

APIs will not all be upside; each program and initiative will have its own policy problems and unintended consequences. One problem that plagues API initiatives is a lack of resources, in the form of money and skilled workers, to make sure efforts are successful. Without proper management, poorly executed APIs can open up huge security holes and introduce privacy concerns at a scale never imagined.

APIs need to be managed properly, with sensible real-time controls for keeping operations in check.

2. How could the Department ensure that access to title IV, HEA student aid programs truly remains free, even amidst the potential development of third-party apps that may charge a fee for assistance in participating in free government programs, products, and services with or without providing legitimate value-added services?

Partner Framework + Service Management = Quality of Service Across Platform

3. What other policy concerns should the Department consider with regard to the potential development of APIs for higher education data and student aid processes at the Department?

I am not a policy or education expert, so I will leave this to others to determine. It is also something that should be built into API operations, and discovered on a program-by-program basis.

4. How would APIs best interact with other systems already in use in student aid processes (e.g., within States)?

The only way you will know is if you do it. Look at how the IRS e-File system is helping with this, even though it isn't a perfect model to follow. We will never know the potential here until a platform is stood up and resources are made available. All signs point to APIs opening up a huge amount of interoperability, not just between states and the federal government, but also with cities and counties.

5. How would Department APIs benefit or burden institutions participating in title IV, HEA programs?

If APIs aren't given the proper resources to operate, they can introduce security, privacy, and support concerns that would not have been there before. A properly run API initiative will provide support, while an underfunded, understaffed initiative will just further burden institutions.

6. While the Department continues to enhance and refine its own processes and products (e.g., through improvements to FAFSA or the IDR application process), how would third-party efforts using APIs complement or present challenges to these processes?

These two things should not be separate. The internal efforts should be seen as just another partner layer within the API ecosystem. All future services and products developed internally within the Department of Education should use the same API infrastructure developed for partners and the public.

If APIs are not used internally, API efforts will always fail. APIs are not just about providing access to resources for external consumers; they are about opening up the Department to think about its resources in a way that benefits the public, partners, and the government itself.


Secure API Deployment From MySQL, JSON and Google Spreadsheets With 3Scale

I'm doing a lot more API deployments from dead simple data sources since I started working in the federal government. As part of these efforts I'm working to put together a simple toolkit that newbies to the API world can use to rapidly deploy APIs as well.

A couple of weeks ago I worked through the simple, open API implementations, and this week I want to show how to secure access to the API by requiring an AppID and AppKey, which will allow you to track who has access to the API.

I'm using 3Scale API management infrastructure to secure the demos. 3Scale has a free base offering that allows anyone to get up and running with API keys, analytics, and other essentials with very little investment.

Currently I have four separate deployment blueprints done:

All of these samples are in PHP and use the Slim PHP REST framework. They are meant to be working examples that you can use to seed your own API deployment.
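
To give a feel for what the secured version of one of these endpoints looks like, here is a stripped down sketch using Slim and 3Scale's AuthRep call. The route name, provider key, and data source are placeholders you would swap for your own; check the 3Scale documentation for the exact parameters your account and metrics expect.

    <?php
    // Minimal sketch: a Slim route that requires an app_id / app_key pair and
    // verifies it against the 3Scale Service Management API before responding.
    require 'Slim/Slim.php';
    \Slim\Slim::registerAutoloader();

    $app = new \Slim\Slim();

    $app->get('/places', function () use ($app) {
        $appId  = $app->request()->params('app_id');
        $appKey = $app->request()->params('app_key');

        // AuthRep authorizes the keys and reports a hit in a single call.
        $query = http_build_query(array(
            'provider_key' => 'YOUR_3SCALE_PROVIDER_KEY',   // from your 3Scale account
            'app_id'       => $appId,
            'app_key'      => $appKey,
            'usage[hits]'  => 1
        ));
        $ch = curl_init('https://su1.3scale.net/transactions/authrep.xml?' . $query);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status != 200) {
            $app->halt(403, json_encode(array('error' => 'invalid app_id or app_key')));
        }

        // Replace with your MySQL, JSON, or Google Spreadsheet data source.
        $app->contentType('application/json');
        echo json_encode(array('places' => array()));
    });

    $app->run();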

You can find the entire working repository, including the Slim framework, at Github.


Lack of Transparency Is Healthcare.gov's Biggest Bottleneck

If you pay attention to the news, you have probably heard about the technical trouble with the launch of the Affordable Care Act, 50 state marketplaces and the central Healthcare.gov site.

People across the country are encountering show-stopping bugs in the sign up process, and if you go to the healthcare.gov site right now, you get a splash page that states, "We have a lot of visitors on the site right now." If you stay on the page, it will refresh every few seconds until, eventually, you might get a working registration form.

I worked at it for hours last night and was finally able to get into the registration process, only to get errors several steps in, but I eventually got through the flow and successfully registered for an account, scrutinizing the code and network activity behind the scenes as I went along.

There are numerous blog posts trying to break down what is going wrong with the Healthcare.gov registration process, but ultimately many of them are very superficial, making vague accusations about the vendors involved and the perceived technology at play. I think one of the better ones was A Programmer's Perspective On Healthcare.gov And ACA Marketplaces, by Paul Smith.

Late last night, the Presidential Innovation Fellows (PIF), led by round one PIF Philip Ashlock (@philipashlock), set out to develop our own opinion about what is happening behind the scenes, working our way through the registration process and trying to identify potential bottlenecks.

When you look at the flow of calls behind each registration page, you see a myriad of calls to JavaScript libraries and to internal and external services that support the flow. There definitely could have been more thought put into preparing this architecture for scaling, but a handful of calls really stand out:

https://www.healthcare.gov/marketplace/global/en_US/registration.js
https://www.healthcare.gov/ee-rest/ffe/en_US/MyAccountEIDMUnsecuredIntegration/createLiteEIDMAccount

The second URL pretty clearly refers to the Centers for Medicare and Medicaid Services (CMS) Enterprise Identity Management (EIDM) platform, which provides new user registration, access management, and identity lifecycle management, allowing users of the Healthcare Exchange Plan Management system to register and get CMS credentials. The registration.js file appears to handle much of the registration process.

Philip identified the createLiteEIDMAccount call, and its payload and response, as the most telling part of the flow: it would most likely be the least resilient portion of the architecture, standing out as a potentially severe bottleneck. The CMS EIDM platform is just one potential choke point, and it isn't a bleeding edge solution; it is pretty straightforward enterprise architecture that may not have had adequate resources allocated to handle the load. I'm guessing under-allocated server and application resources are playing a rampant role across Healthcare.gov operations.

Many of the articles I've read over the last couple of days reference the front-end of Healthcare.gov, its use of Jekyll and APIs, and the dangers of open washing and technological solutionism, when this is most likely an under-allocated, classic enterprise piece of the puzzle that can't keep up. I do agree with portions of the open washing arguments, specifically around showcasing the project as "open" when, in reality, the front-end is the only open piece, with the backend being a classic, closed architecture and process.

Without transparency into the entire stack of Healthcare.gov and the marketplace rollouts, it is not an open project; I don't care if any individual part of it is, which makes it open washing. The teams in charge of the front-end were very transparent in getting feedback on the implementation and publishing the code to Github for review. It isn't guaranteed, but if the entire backend stack had followed the same approach, publishing the technology, architectural approaches, and load testing numbers throughout a BETA cycle for the project, things might have been different on launch day.

Transparency goes a long way toward improving not just the technology and architecture, but can also shed light on illnesses in the procurement, contracting, and other business and political aspects of projects. Many technologists will default to thinking I'm talking about open source, open tools, or open APIs, but in reality I'm talking about an open process.

In the end, this story is just opinion and speculation. Without any transparency into exactly what the backend architecture of Healthcare.gov and the marketplaces is, we have no idea what the problem actually is. I'm just soapboxing my opinion like the authors of every other story published about this problem over the last couple of days, making them no more factual than my other fictional pieces about this being an inside job or a cleverly disguised denial of service attack!


IRS Modernized e-File (MeF): A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy (DRAFT)

Download as PDF

The Internal Revenue Service is the revenue arm of the United States federal government, responsible for collecting taxes and for the interpretation and enforcement of the Internal Revenue Code.

The first income tax was assessed in 1862 to raise funds for the American Civil War, and over the years the agency has grown and evolved into a massive federal entity that collects over $2.4 trillion each year from approximately 234 million tax returns.

While the IRS has faced many challenges in its 150 years of operations, the last 40 years have demanded some of the agency's biggest transformations, at the hands of technology, more than at any time since its creation.

In the 1970s, the IRS began wrestling with the challenge of modernizing itself using the latest computer technology. This eventually led to a pilot program in 1986 of a new Electronic Filing System (EFS), which aimed in part to gauge the acceptance of such a concept by tax preparers and taxpayers.

By the 1980s, tax collection had become very complex, time-consuming, costly, and riddled with errors, due to what had become a dual process of managing paper forms while also converting them into a digital form so that they could be processed by machines. The IRS desperately needed to establish a solid approach that would enable the electronic submission of tax forms.

It was a rocky start for the EFS, and Eileen McCrady, systems development branch and later marketing branch chief, remembers, “Tax preparers were not buying any of it--most people figured it was a plot to capture additional information for audits." But by 1990, IRS e-file operated nationwide, and 4.2 million returns were filed electronically. This proved that EFS offered a legitimate approach to evolving beyond a tax collection process dominated by paper forms and manual filings.

Even Federal Agencies Can't Do It Alone

Even with the success of the early e-file technology, the program would not have gotten the momentum it needed without the support of two major tax preparation partners, H&R Block and Jackson-Hewitt. These partnerships helped change the tone of EFS efforts, making e-filing more acceptable and appealing to tax professionals. It was clear that e-File needed to focus on empowering a trusted network of partners to submit tax forms electronically, sharing the load of tax preparation and filing with 3rd party providers. And this included not just the filing technology, but also a network of evangelists spreading the word that e-File was a trustworthy and viable way to work with the IRS.

Bringing e-File Into The Internet Age

By 2000, Congress had passed IRS RRA 98, which contained a provision setting a goal of an 80% e-file rate for all federal tax and information returns. This, in effect, forced the IRS to upgrade the e-File system for the Internet age, otherwise they would not be able to meet this mandate. A working group was formed, composed of tax professionals and software vendors, that would work with the IRS to design, develop, and implement the Modernized e-File (MeF) system, which employed the latest Internet technologies, including a new approach to web services using XML that would allow 3rd party providers to submit tax forms in a real-time, transactional approach (this differed from the batch submissions required in previous versions of the EFS).

Moving Beyond Paper One Form At A Time

Evolving beyond 100 years of paper processes doesn't happen overnight. Even with the deployment of the latest Internet technologies, you have to incrementally bridge the legacy paper processes to a new online, digital world. After the deployment of the MeF, the IRS worked year by year to add the myriad of IRS forms to the e-File web service, allowing software companies, tax preparers, and corporations to digitally submit forms into IRS systems over the Internet. Form by form, the IRS was being transformed from a physical document organization to a distributed network of partners that could submit digital forms through a secure, online web service.

Technological Building Blocks

The IRS MeF solution represents a new approach to using modern technology by the federal government in the 21st century Internet age. In the last 15 years, a new breed of Internet enabled software standards have emerged that enable the government to partner with the private sector, as well as other government agencies, in ways that were unimaginable just a decade ago.

Web Services

Websites and applications are meant for humans. Web services, also known as APIs, are meant for other computers and applications. Web services have allowed the IRS to open up the submission of forms and data into central IRS systems, while also transmitting data back to trusted partners regarding errors and the status of form submissions. Web services allow the IRS to stick with what it does best, the receiving, filing, and auditing of tax filings, while trusted partners use web services to deliver e-Filing services to customers via custom developed software applications.

Web services are designed to utilize existing Internet infrastructure used for everyday web operations as a channel for delivering trusted services to consumers around the country, via the web.

An XML Driven Communication Flow

XML is a way to describe each element of an IRS form and its supporting data. XML makes paper forms machine readable so that the IRS and 3rd party systems can communicate using a common language, allowing the IRS to share a common set of logic around each form, and then use what are known as schemas to validate the XML submitted by trusted partners against a set of established business rules that enforce the IRS code. XML gives the IRS the ability to communicate with 3rd party systems using digital forms, applying business rules to accept or reject the submitted forms, which can then be stored in an official IRS repository in a way that can be viewed and audited by IRS employees (using stylesheets which make the XML easily readable by humans).
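
For readers who have not worked with XML schemas, the validation step itself is straightforward from a developer's perspective. The sketch below uses PHP's DOMDocument as the validator, with hypothetical file names standing in for a submitted return and its schema; the actual MeF schemas, form names, and business rules are published by the IRS for its partners.

    <?php
    // Minimal sketch of schema validation: a submitted XML return is checked
    // against the published XSD before it is accepted into downstream systems.
    // File names here are hypothetical placeholders.
    $doc = new DOMDocument();
    $doc->load('submitted-return.xml');              // the partner's submission

    libxml_use_internal_errors(true);                // collect errors instead of emitting warnings
    if ($doc->schemaValidate('return-form.xsd')) {   // the schema for this form and tax year
        echo "Return accepted for processing\n";
    } else {
        // Reject the submission and report each rule the XML violated.
        foreach (libxml_get_errors() as $error) {
            echo 'Rejected: ' . trim($error->message) . "\n";
        }
        libxml_clear_errors();
    }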

Identity and Access Management (IAM)

When you expose web services publicly over the Internet, secure authentication is essential. The IRS MeF system is a model for securing the electronic transmission of data between the government and 3rd party systems. The IRS has employed a design built around the Internet Filing Application (IFA) and Application to Application (A2A) channels, which follow the Web Services-Interoperability (WS-I) security standards. Security of the MeF system is overseen by the IRS MITS Cyber Security organization, which ensures all IRS systems receive, process, and store tax return data in a secure manner. MeF security involves an OMB mandated Certification and Accreditation (C&A) Process, requiring a formal review and testing of security safeguards to determine whether the system is adequately secured.

Business Building Blocks

Properly extending e-File web services to partners isn't just a matter of technology. Numerous building blocks are required that are more business than technical, ensuring a healthy ecosystem of web service partners. With a sensible strategy, web services get translated from tech to business, allowing partners to turn IRS MeF into e-filing products that deliver the required services to consumers.

Four Separate e-Filing Options

MeF provided the IRS with a way to share the burden of filing taxes with a wide variety of trusted partners, software developers and corporations who have their own software systems. However MeF is just one tool in a suite of e-File tools. These include Free File software that any individual can use to submit their own taxes, as well as free fillable digital forms that individuals can use if they do not wish to employ a software solution.

Even with these simple options, the greatest opportunity for individuals and companies is to use commercial tax software that walks you through what can be a complex process, or to depend on a paid tax preparer who employs their own commercial version of the software. The programmatic web service version of e-file is just one option, but it is the heart of an entire toolkit of software that anyone can put to use.

Delivering Beyond Technology

The latest evolution of the e-file platform has technology at its heart, but it delivers much more than just the transmission of digital forms from 3rd party providers, in ways that also make good business sense:

  • Faster Filing Acknowledgements - Transmissions are processed upon receipt and acknowledgements are returned in near real-time, unlike the once or twice daily system processing cycles in earlier versions
  • Integrated Payment Option - Tax-payers can e-file a balance due return and, at the same time, authorize an electronic funds withdrawal from their bank accounts, with payments being subject to limitations of the Federal Tax Deposit rules
  • Brand Trust - MeF has evolved beyond just the IRS brand, allowing new trusted commercial brands, like TurboTax and TaxAct, to step up and deliver value to consumers

Without improved filing results for providers and customers, easier payment options and an overall set of expectations and trust, MeF would not reach the levels of e-Filing rates mandated by Congress. Technology might be the underpinning of e-File, but improved service delivery is the thing that will seal the deal with both providers and consumers.

Multiple Options for Provider Involvement

Much like the multiple options available for tax filers, the IRS has established tiers of involvement for partners in the e-File ecosystem. Depending on their model and capabilities, e-File providers can step up and participate in multiple ways:

  • Electronic Return Originators (EROs) - EROs prepare returns for clients, or collect returns from taxpayers who have prepared their own, then begin the electronic transmission of returns to the IRS
  • Intermediate Service Providers - These providers process tax return data that originates from an ERO or an individual taxpayer, and forward it to a transmitter
  • Transmitters - Transmitters are authorized to send tax return data directly to the IRS, from custom software that connects directly with IRS computers
  • Online Providers - Online providers are a type of transmitter that sends returns filed from home by taxpayers using tax preparation software to file common forms
  • Software Developers - Software developers write the e-file software programs that follow IRS specifications for e-file
  • Reporting Agents - An accounting service, franchiser, bank, or other person that is authorized to e-file Form 940/941 for a taxpayer

The IRS identified the multiple ways it needed help from an existing, evolving base of companies and organizations, and has been able to design its partner framework to best serve its mission while delivering the best value to consumers, in a way that recognizes the incentives needed to solicit participation from the private sector and ensure efforts are commercially viable.

Software Approval Process

The IRS requires all tax preparation software used for preparing electronic returns to pass the requirements of Modernized e-File Assurance Testing (ATS). As part of the process, software vendors notify the IRS via the e-help Desk that they plan to commence testing, then provide a list of all forms that they plan to include in their tax preparation software; the IRS does not require that vendors support all forms. MeF integrators are allowed to develop their tax preparation software based on the needs of their clients, using pre-defined test scenarios to create test returns formatted in the specified XML format. Software integrators then transmit the XML formatted test returns to the IRS, where an e-help Desk assister checks the data entry fields on each submitted return. When the IRS determines the software correctly performs all required functions, the software is approved for electronic filing. Only then are software vendors allowed to publicly market their tax preparation software as approved for electronic filing, whether for use by corporations, tax professionals, or individual users.

State Participation

Another significant part of the MeF partnership equation is providing seamless interaction with the electronic filing of both federal and state income tax returns at the same time. MeF provides the ability for partners to submit both federal and state tax returns in the same "taxpayer envelope", allowing the IRS to function as an "electronic post office" for participating state revenue services -- certainly better meeting the demands of the taxpaying citizen. The IRS model provides an important aspect of a public / private sector partnership with the inclusion of state participation. Without state level participation, any federal platform will be limited in adoption and severely fragmented in integration.

Resources

Nurturing an ecosystem of partners takes a wealth of resources. Providing technical how-to guides, templates, and other resources for MeF providers is essential to the success of the platform. Without proper support, MeF developers and companies are unable to keep up with the complexities and changes of the system. The IRS has provided the resources needed for each step of the e-Filing process, from on-boarding, to understanding the addition of the latest forms, to changes in the tax code.

Market Research Data

Transparency of the MeF platform goes beyond individual platform operations, and the IRS acknowledges this important aspect of building an ecosystem of web service partners. The IRS provides valuable e-File market research data to partners by making available e-file demographic data and related research and surveys. This important data provides valuable insight for MeF partners to use in their own decision making process, but also provides the necessary information partners need to educate their own consumers as well as the general public about the value the e-File process delivers. Market research is not just something the IRS needs for its own purposes; this research needs to be disseminated and shared downstream providing the right amount of transparency that will ensure healthy ecosystem operations.

Political Building Blocks

Beyond the technology and business of the MeF web services platform, there are plenty of political activities that make sure everything operates as intended. The politics of web service operations can be as simple as communicating properly with partners and providing transparency, or extend all the way up to security, proper governance of web services, and enforcement of federal laws.

Status

The submission of over 230 million tax filings annually requires a significant amount of architecture and connectivity. The IRS provides real-time status of the MeF platform for the public and partners as they work to support their own clients. Real-time status updates of system availability keep partners and providers in tune with the availability of the overall system, allowing them to adjust their own operations to the reality of supporting such a large system. Status of availability is an essential aspect of MeF operations and overall partner ecosystem harmony.

Updates

An extension of MeF platform status is the ability to keep MeF integrators up to date on everything to do with ongoing operations. This includes providing alerts when the platform needs to tune partners in to specific changes in tax law, resource additions, or other relevant operational news. The IRS also provides updates via an e-newsletter, a more asynchronous way for the MeF platform to keep partners informed about ongoing operations.

Updates over the optimal partner channels are an essential addition to real-time status and other resources that are available to platform partners.

Roadmap

In addition to resources, status, and regular updates on the overall MeF system, the IRS provides insight into where the platform is going next, keeping providers apprised of what is next for the e-File program. Establishing and maintaining the trust of MeF partners in the private sector is constant work, and requires a certain amount of transparency, allowing partners to anticipate what is next and make adjustments on their end of operations. Without insight into what is happening in the near and long term future, trust with partners will erode and overall belief in the MeF system will be disrupted, unraveling over 30 years of hard work.

Governance

The Modernized e-File (MeF) programs go through several stages of review and testing before they are used to process live returns. When new requirements and functionality are added to the system, testing is performed by IRS's software developers and by IRS's independent testing organization. These important activities ensure that the electronic return data can be received and accurately processed by MeF systems. Every time an IRS tax form is changed and affects the XML schema, the entire development and testing processes are repeated to ensure quality and proper governance.

Security

Secure transmissions by 3rd parties with the MeF platform are handled by the Internet Filing Application (IFA) and Application to Application (A2A), which are part of the IRS Modernized System Infrastructure, providing access to trusted partners through the Registered User Portal (RUP). Transmitters using IFA are required to use their designated e-Services user name and password in order to log into the RUP. Each transmitter also establishes an Electronic Transmitter Identification Number (ETIN) prior to transmitting returns. Once the transmitter successfully logs into the RUP, a Secure Socket Layer (SSL) Handshake Protocol allows the RUP and transmitter to authenticate each other and negotiate an encryption algorithm, including cryptographic keys, before any return data is transmitted. The transmitter and the RUP negotiate a secret encryption key for encrypted communication between the transmitter and the MeF system. As part of this exchange, MeF will only accommodate one type of user credential for authentication and validation of A2A transmitters: a username and an X.509 digital security certificate. Users must have a valid X.509 digital security certificate obtained from an IRS authorized Certificate Authority (CA), such as VeriSign or IdenTrust, and then have their certificates stored in the IRS directory using an Automated Enrollment process.

The entire platform is accredited by the Executive Level Business Owner, who is responsible for the operation of the MeF system, with guidance provided by the National Institute of Standards and Technology (NIST). The IRS MITS Cyber Security organization and the business system owner are jointly responsible and actively involved in completing the IRS C&A Process for MeF, ensuring complete security of all transmissions with MeF over the public Internet.

A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy

The IRS MeF platform provides a technological blueprint that other federal agencies can look to when exposing valuable data and resources to other agencies as well as the private sector. Web services, XML, and proper authentication can open up access and interactions between trusted partners and the public in ways that were never possible prior to the Internet age.

While this web services approach is unique within the federal government, it is a common way to conduct business operations in the private sector, something widely known as Service Oriented Architecture (SOA), an approach that is central to a healthy enterprise architecture. A services oriented approach allows organizations to decouple resources and data and open up very wide or granular levels of access to trusted partners. The SOA approach makes it possible to submit forms, data, and other digital assets to government, using XML as a way to communicate and validate information in a way that supports proper business rules, wider governance, and federal law.

SOA provides three essential ingredients for public and private sector partnership:

  • Technology - Secure usage of modern approaches to using compute, storage and Internet networking technology in a distributed manner
  • Business - Adherence to government lines of business, while also acknowledging the business needs and interest of 3rd party private sector partners
  • Politics - A flexible understanding and execution of activities involved in establishing a distributed ecosystem of partners, and maintaining an overall healthy balance of operation

The IRS MeF platform employs this balance at a scale that is currently unmatched in the federal government. MeF provides a working blueprint that can be applied across the federal government, in areas ranging from the veterans claims process to the financial regulatory process.

The United States federal government faces numerous budgetary challenges and must find new ways to share the load with other federal and state agencies as well as the private sector. A SOA approach like MeF allows the federal government to better interact with existing contractors, as well as future contractors, in a way that provides better governance, while also allowing for partnerships with the private sector in ways that go beyond simply contracting. The IRS MeF platform encourages federal investment in a self-service platform that enables trusted and proven private sector partners to access IRS resources in predefined ways, all of which support the IRS mission, but provide enough incentive that 3rd party companies will invest their own money and time into building software solutions that can be fairly sold to US citizens.

When an agency builds an SOA platform, it is planting the seeds for a new type of public / private partnership whereby government and companies can work together to deliver software solutions that meet a federal agency's mission and the market needs of companies. This also delivers value and critical services to US citizens, all the while reducing the size of government operations, increasing efficiencies, and saving the government and taxpayers money.

The IRS MeF platform represents 27 years of laying a digital foundation, building the trust of companies and individual citizens, and properly adjusting the agency's strategy to work with private sector partners. It has done so by employing best of breed enterprise practices from the private sector. MeF is a blueprint that cannot be ignored and deserves more study, modeling, and evangelism across the federal government. This could greatly help other agencies understand how they too can employ an SOA strategy, one that will help them better serve their constituents.

You Can View, Edit, Contribute Feedback To This Research On Github


API Issue Management Using Github

Github should be the center of your API operations, with the most obvious use being for SDK repositories, but Github offers a lot of other valuable tools that you can use to manage your API platform.

One great use of Github is as an API issue management tool. The Github issue management system allows you to easily accept issue reports from your API community and apply labels to organize them into appropriate categories, for processing by your development team.

To set up Github issue management for your API, just create a new repository. You won't actually be pushing any code; you will just be using it as a container for running issue management. Think of it as a repository for your API itself.

Once set up, you can link the issue management page directly from your API area, allowing users to actively submit issues, comment, and potentially be part of the API product development cycle.
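
As a quick illustration of how you might surface those issues inside your own developer area, here is a small sketch that pulls the open issues for a repository using the Github API. The owner and repository names are placeholders; unauthenticated calls work for public repositories, subject to Github's rate limits.

    <?php
    // Minimal sketch: list open issues for a public repository via the Github API,
    // so they can be displayed alongside your API documentation.
    $owner = 'your-org';   // placeholder
    $repo  = 'your-api';   // placeholder

    $ch = curl_init("https://api.github.com/repos/$owner/$repo/issues?state=open");
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array('User-Agent: my-api-portal')  // the Github API requires a User-Agent
    ));
    $issues = json_decode(curl_exec($ch), true);
    curl_close($ch);

    foreach ((array) $issues as $issue) {
        if (isset($issue['title'])) {
            echo $issue['number'] . ': ' . $issue['title'] . "\n";
        }
    }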


If there is an API management related story you'd like me to know about, you can submit it as a Github issue for this research project and I will consider adding it as part of my research.