
API Management News

These are the news items I've curated in my monitoring of the API space that are related to managing APIs and that I thought were worth including in my research. I'm using all of these links to better understand how APIs are being managed across a diverse range of implementations.

Sadly Stack Exchange Blocks API Calls Being Made From Any Of Amazon's IP Blocks

I am developing an authentication and access layer for the API Gallery that I am building for Streamdata.io, while also federating it for usage as part of my API Stack research. In addition to building out these catalogs for API discovery purposes, I’m also developing a suite of tools that allow users to subscribe to different topics from popular sources like GitHub, Reddit, and Stack Overflow (Exchange). I’ve been busy adding one or two providers to my OAuth broker each week, until the other day I hit a snag with the Stack Exchange API.

I thought my Stack Exchange API OAuth flow had been working, it's been up for a few months, and I seem to remember authenticating against it before, but this weekend I began getting an error that my IP address was blocked. I was looking at log files trying to understand if I was making too many calls, or committing some other potential violation, but I couldn't find anything. Eventually I emailed Stack Exchange to see what their guidance was, to which I got a prompt reply:

“Yes, we block all of Amazon’s AWS IP addresses due to the large amount of abuse that comes from their services. Unfortunately we cannot unblock those addresses at this time.”

Ok then. I guess that is that. I really don't feel like setting up another server with another provider just so I can run an OAuth server from there. Or maybe I will have to, if I expect to offer a service that provides OAuth integration with Stack Exchange. It's a pretty unfortunate situation that doesn't make a whole lot of sense. I can understand adding another layer of whitelisting for developers, pushing them to add their IP address to their Stack Exchange API application, and pushing us to justify that our app should have access, but blacklisting an entire cloud provider from accessing your API is just dumb.

I am going to weigh my options, and explore what it will take to set up another server elsewhere. Maybe I will start setting up individual subdomains for each OAuth provider I add to the stack, so I can decouple them, and host them on another platform, in another region. This is one of those roadblocks you encounter doing APIs that just doesn't make a whole lot of sense, and yet you still have to find a workaround–you can't just give in, despite API providers being so heavy handed, and not considering the impact of their moves on their consumers. I'm guessing in the end, the Stack Exchange API doesn't fit into their wider business model, which is something that allows blind spots like this to form, and continue.


Justifying My Existence In Your API Sales And Marketing Funnel

I feel like I'm regularly having to advocate for my existence, and the existence of developers who are like me, within the sales and marketing funnel for many APIs. I sign up for a lot of APIs, and have the pleasure of enjoying a wide variety of on-boarding processes for APIs. With many APIs I have no problem signing up, on-boarding, and beginning to make calls, while with others I have to justify my existence within their API sales and marketing funnel. Don't get me wrong, I'm not saying that I shouldn't be expected to justify my existence, it is just that many API providers are setup to immediately discourage, create friction for, and dismiss my class of API integrator–the kind that doesn't fit neatly into the shiny big money integration you have defined at the bottom of your funnel.

I get that we all need to make money. I have to. I'm actually in the business of helping you make money. I'm just saying that you are missing out on a significant amount of opportunity if you only focus on what comes out the other side of your funnel, and discount the nutrients developers like me can bring to your funnel ecosystem. I'm guessing that my little domain apievangelist.com does not return the deal size you are looking for, but I think you are putting too much trust into the numbers provided to you by your business intelligence provider. I get that you are hyper focused on making the big deals, but you might be leaving a big deal on the table by shutting out small fish, who might have oversized influence within their organization, government agency, or within an industry. Your business intelligence is focusing on the knowns, and doesn't seem very open to considering the unknowns.

As the API Evangelist I have an audience. I’ve been in business since 2010, so I’ve built up an audience of enterprise folks who read what I write, and listen to “some” of what I say. I know people like me within the federal government, within city government, and across the enterprise. Over half the people I know who work within the enterprise, helping influence API decisions, are also kicking the tires of APIs at night. Developers like us do not always have a straightforward project, we are just learning, understanding, and connecting the dots. We don’t always have the ready to go deal in the pipeline, and are usually doing some homework so that we can go sell the concept to decision makers. Make sure your funnel doesn’t keep us out, run us away, or remove channels for our voice to be heard.

In a world where we focus only on the big deals, and focus on scaling and automating the operation of platforms, we run the risk of losing ourselves. If you are only interested in landing those big customers, and achieving the exit you desire, I understand. I am not your target audience. I will move on. It also means that I won't be telling any stories about what you are doing, building any prototypes, or generally amplifying what you are doing on social media, and across the media landscape. Providing many of the nutrients you will need to land some of the deals you are looking to get, generating the internal and external buzz needed to influence the decision makers. Providing real world use cases of why your API-driven solution is the one an enterprise group should be investing in. Make sure you aren't locking us out of your platform, and that you are investing the energy into getting to know your API consumers, more about what their intentions are, and how it might fit into your larger API strategy–if you have one.


Being Open Is More About Being Open So Someone Can Extract Value Than It Is About Any Shared Value

One of the most important lessons I've learned in the last eight years is that when people are insistent about things being open, in both accessibility and cost, it is often more about things remaining open for them to freely (license-free) extract value from, than it is ever about any shared or reciprocal value being generated. I've fought many a battle on the front lines of "open", leaving me pretty skeptical when anyone is advocating for open, and forcing me to be even more critical of my own positions as the API Evangelist, and the bullshit I peddle.

In my opinion, ANYONE wielding the term open should be scrutinized for insights into their motivations–me included. I've spent eight years operating on the front line of both the open data and the open API movements, and unless you are coming at it from the position of a government entity, or from a social justice frame of mind, you are probably wanting open so that you can extract value from whatever is being opened. There are many different shades of intent when it comes to actually contributing any value back, and supporting the ecosystem around whatever is actually being opened.

I ran with the open data dogs from 2008 through 2015 (still howl and bark), pushing for city, county, state, and federal government to open up data. I've witnessed how everyone wants it opened, sustained, maintained, and supported, but does not want to give anything back. Google doesn't care about the health of local transit, as long as the data gets updated in Google Maps. Almost every open data activist, and data focused startup I've worked with has high expectations for what government should be required to do, and very low expectations regarding what should be expected of them when it comes to paying for commercial access, sharing enhancements and enrichments, providing access to usage analytics, and being observable and open to sharing access to the end-users of this open data. Libertarian capitalism is well designed to take, and not give back–yet actively encourage open.

I deal with companies, organizations, and institutions every day who want me to be more open with my work. They are more than happy to go along for the ride when it comes to the momentum built up from open in-person gatherings, Meetups, and conferences. Always asking me to be open to syndicating data, content, and research. All while working as hard as possible to extract as much value as they can, and not give anything back. There are many, many, many companies who have benefitted from the open API work that I, and other evangelists in the space do on a regular basis, without ever considering if they should support them, or give back. I regularly witness partnership scenarios across all of the API platforms I monitor, where the larger more proprietary and successful partner extracts value from the smaller, more open and less proven partner. I get that some of this is just the way things are, but much of it is about larger, well-resourced, and more closed groups just taking advantage of smaller, less-resourced, and more open groups.

I have visibility into a number of API platforms that are the targets of many unscrupulous API consumers who sign up for multiple accounts, do not actively communicate with platform owners, and are just looking for a free hand out at every turn. Making it very difficult to be open, and oftentimes something that can also be very costly to maintain, sustain, and support. Open isn't FREE! Publicly available data, content, media, and other resources cost money to operate. The anti-competitive practices of large tech giants have set the price for common digital resources so low, for so long, that it has changed behaviors and set unrealistic expectations as the default. Resulting in some very badly behaved API ecosystem players, and ecosystems that encourage and incentivize bad behavior within specific API communities, something that also spreads from provider to provider. Giving APIs a bad name.

When I come across people being vocal about some digital resource being open, I immediately begin conducting a little due diligence on who they are. Their motivations will vary depending on where they come from, and while there are no constants, I can usually tell a lot about someone based on whether they come from a startup ecosystem, the enterprise, government, venture capital, or other dimensions of our reality that the web has reached into recently. My self-appointed role isn't just about teaching people to be more "open" with their digital assets, it is more about teaching people to be more aware and in control over their digital assets. Because there are a lot of wolves in sheep's clothing out there, trying to convince you that "open" is an essential part of your "digital transformation", and showcasing all the amazing things that will happen when you are more "open". When in reality they are just interested in you being more open so that they can get their grubby hands on your digital resources, then move on down the road to the next sucker who will fall for their "open" promises.


The Path To Production For Department of Veterans Affairs (VA) API Applications

This post is part of my ongoing review of the Department of Veterans Affairs (VA) developer portal and API presence, moving on to where I take a closer look at their path to production process, and provide some feedback on how the agency can continue to refine the information they provide to their new developers. Helping map out the on-boarding process for any new developer, ensuring they are fully informed about what it will take to develop an application on top of VA APIs, and move those application(s) from a development state to a production environment, actually serving veterans.

Beginning with the VA’s base path to production template on GitHub, then pulling in some elements I found across the other APIs they have published to developer.va.gov, and finishing off with some ideas of my own, I shifted the outline for the path to production to look something like this:

  • Background - Keeping the background of the VA API program.
  • [API Overview] - Any information relevant to specific API(s).
  • Applications - One right now, but eventually several applications, SDKs, and samples.
  • Documentation - The link, or embedded access to the API documentation, OpenAPI definition, and Postman Collection.
  • Authentication - Overview of how to authenticate with VA APIs.
  • Development Access - Provide an overview of signing up for development access.
  • Developer Review - What is needed to become a developer.
    • Individual - Name, email, and phone.
    • Business - Name, URL.
    • Location - In country, city, and state.
  • Application Review - What is needed to have an application (or applications) reviewed.
    • Terms of Service - In alignment with platform TOS.
    • Privacy Policy - In alignment with platform privacy policy.
    • Rate Limits - Aware of the rate limits that are imposed.
  • Production Access - What happens once you have production access.
  • Support & Engagement - Using support, and expected levels of engagement.
  • Service Level Agreement - Platform working to meet an SLA governing engagement.
  • Monthly Review - Providing monthly reviews of access and usage on the platform.
  • Annual Audits - Annual audits of access, and usage, with developer and application reviews.

I wanted to keep much of the content that the VA already had up there, but I also wanted to reorganize things a little bit, and make some suggestions for what might be next. Resulting in a path to production section that might look a little more like this.


Department of Veterans Affairs (VA) API Path To Production

Background

The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA's core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA's API ecosystem. We are in the alpha stage of the project, wherein we provide one API that enables parties external to the VA to submit VBA forms and supporting documentation via PDF on behalf of Veterans.

[API Specific Overview]

Applications

Take a look at the sample code and documentation we have on GitHub at https://github.com/department-of-veterans-affairs/vets-api-clients. We will be developing more starter applications, developing open source SDKs and code samples, while also showcasing the work of other API developers in the near future–check back often.

Documentation

Spend some time with the documentation for the API(s) you are looking to use. Make sure the VA has the resources you will need to make your application work, before you sign up for a developer account, and submit your application for review.

Authentication

VA uses token-based authentication. Clients must provide a token with each HTTP request as a header called apiKey. This token is issued by VA. To receive a developer token to access this API in a test environment, please request access.
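For example, a minimal request might look like the following Go sketch. The request path and token value are placeholders; only the apiKey header name and the dev-api.vets.gov base URL come from the documentation referenced here.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Hypothetical endpoint path; substitute the path for the API you are using.
	req, err := http.NewRequest("GET", "https://dev-api.vets.gov/services/example", nil)
	if err != nil {
		log.Fatal(err)
	}

	// The development token issued by the VA is passed as the apiKey header.
	req.Header.Set("apiKey", "YOUR_DEVELOPMENT_TOKEN")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```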

Development Access

All new developers must sign up for development access to the VA API platform, providing a minimal amount of information about yourself, the business or organization you represent, where you operate in the United States, and the application you will be developing.

Developer Review

You will provide the VA with details about yourself, the business you work for, and your location. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Individual Information
    • Name - Your name.
    • Email - Your email address.
    • Phone - Your phone number.
  • Business Information
    • Name - Your business or organizational name.
    • Website - Your business or organization website.
  • Location Information
    • City - The city where you operate.
    • State - The state where you operate.
    • Country - Acknowledge that you operate within the US.

We will take a look at all your details, then verify you and your business or organization as quickly as possible. Once you hear back from us via email, you will either be declined, or we will send you a Client ID and Secret for accessing the VA API development environment. When you are ready, you can submit your application for review by the VA team.

Application Review

You will provide us with details about the application you are developing, helping us understand the type of application you will be providing to veterans. Submitting the following information as a GitHub issue, or via email for more privacy:

  • Name - The name of your application.
  • Details - Details about your application.
  • Terms of Service - Your application is in alignment with our terms of service.
  • Privacy Policy - Your application is in alignment with our privacy policy.
  • Rate Limits - You are aware of the rate limits imposed on your application.
  • Working Demo - A working demo of the application will be needed for the review.
  • Code Review - A code review will be conducted when your application goes to production.

We will take a look at all your details, and contact you about scheduling a code review. Once all questions about your application are answered, and a code review has been conducted, you will be notified whether your application is accepted or not.

Production Access

Once approved for production access, you will receive an email from the VA API team notifying you of your application's status. You will receive a new Client ID and Secret for your application for use in production, allowing you to use the base URL api.vets.gov instead of dev-api.vets.gov, and begin accessing live data.

Support & Engagement

The VA will be providing support using GitHub issues, and via email. All developers will be required to engage through these channels to stay current with VA API operations, and to maintain access to VA APIs.

Service Level Agreement

The VA will be providing a service level agreement for each of the APIs we provide, committing to a certain quality of service, support, and communication around the APIs you will be integrating your applications with.

Monthly Review

Each month developers will receive an email from the VA regarding their access and usage, providing a summary of the engagement. A response is required to maintain an active status as a VA developer and application. After 90 days of no response, all developer or production application keys will be revoked, until contact is made. This keeps all applications active, with a responsive administrator actively managing things.

Annual Audits

Each year developers will receive an email from a VA API audit representative, reviewing your access and usage, and conducting a fresh developer review, as well as a review of applications, access tokens, and end-usage. Ensuring that all applications operating in a production environment continue to meet the expected security, privacy, support, and operational standards.


It Is Not Just A Path, But A Journey

I'm trying to shift the narrative for the path to production into being a more complete picture of the journey. What is expected of the developers, their applications, as well as setting the table for what can be expected of the VA. I don't expect the VA to bite on all of these suggestions, but I can't help but put them in there when they are relevant, and I have the opportunity ;-)

Decouple Developer And Application(s)

One of the reasons I separated the developer review from the application review is so that a developer could sign up and immediately get a key to kick the tires and begin developing. When ready, which many will never be, they can submit an application for potential production access. Developers might end up having multiple applications, so if we can decouple them early on, and allow all developers to have a default application in the development environment, while also being able to have multiple applications in a production environment, I feel things will be more scalable.

Not Forgetting The Path After Production

I wanted to make sure to put in some sort of process that would help ensure both the VA, and developers, are investing in ongoing management of their engagement. Ensuring light monthly reviews of ALL applications using the APIs, and pushing for developers and the VA to stay engaged. While also making sure all developers and applications get annual reviews, preventing what happened with the Cambridge Analytica / Facebook situation, where a malicious application gets access to such a large amount of data, without any regular review. Making sure the VA isn't forgetting about applications once they are in a production state. Pushing this to be more than just about achieving production status with an application, and also continuing to ensure it is serving end-users, the veterans.

Serving Veterans With Your Application

To close, I'd say that calling this a path to production doesn't do it justice. It should be a guide to being able to serve a veteran with your application. Acknowledging that it will take a significant amount of development before your application will be ready, but also that the VA will work to review your developer account, and your application, and ensure that VA APIs, and the applications that depend on them, will operate as expected, and always be in service of the veteran. Something that will require a certain level of rigor when it comes to the development, certification, and support of applications across the VA API ecosystem.


An API Value Generation Funnel With Metrics

I've had several folks asking me to articulate my vision of an API centric "sales" funnel, which technically is out of my wheelhouse in the sales and marketing area, but since I do have lots of opinions on what a funnel should look like for an API platform, I thought I'd take a crack at it. To help articulate what is in my head I wanted to craft a narrative, as well as a visual to accompany how I like to imagine a value generation funnel for any API platform.

I envision an API-driven value generation funnel that can be tipped upside down, over and over, like an hourglass, generating value as it repeatedly pushes API activity through the center, driven by a healthy ecosystem of developers, applications, and end-users putting applications to work. Providing a way to generate awareness and engagement with any API platform, while also ensuring a safe, reliable, and secure ecosystem of applications that encourage end-user adoption, engagement, and loyalty–further expanding on the potential for developers to continue developing new applications, and enhancing their applications to better serve end-users.

I am seeing things in ten separate layers, something I'll keep shifting and adjusting in future iterations, but I just wanted to get a draft funnel out the door:

  • Layer One - Getting folks in the top of the funnel.
    • Awareness - Making people aware of the APIs that are available.
    • Engagement - Getting people engaged with the platform in some way.
    • Conversation - Encouraging folks to be part of the conversation.
    • Participation - Getting developers participating on a regular basis.
  • Layer Two
    • Developers - Getting developers signing up and creating accounts.
    • Applications - Getting developers signing up and creating applications.
  • Layer Three
    • Sandbox Activity - Developers being active within the sandbox environment.
  • Layer Four
    • Certified Developers - Certifying developers in some way to know who they are.
    • Certified Application - Certifying applications in some way to ensure quality.
  • Layer Five
    • Production Activity - Incentivizing production applications to be as active as possible.
  • Layer Six
    • Value Generation (IN / OUT) - Driving the intended behavior from all applications.
  • Layer Seven
    • Operational Activity - Doing what it takes internally to properly support applications.
  • Layer Eight
    • Audit Developers - Make sure there is always a known developer behind the application.
    • Audit Applications - Ensure the quality of each application with regular audits.
  • Layer Nine
    • Showcase Developers - Showcase developers as part of your wider partner strategy.
    • Showcase Applications - Showcase and push for application usage across an organization.
  • Layer Ten
    • Loyalty - Develop loyal users by delivering the applications that users are needing.
    • End-Users - Drive end-user growth by providing the applications end-users need.
    • Engagement - Push for deeper engagement with end-users, and the applications they use.
    • End-Usage - Incentivize the publishing and consumption of all platform resources.

I’m envisioning a funnel that you can turn on its head over and over and generate momentum, and kinetic energy, with the right amount of investment–the narrative for this will work in either direction. Resulting in a two-sided funnel both working in concert to generate value in the middle.

To go along with this API value generation funnel, I'm picturing the following metrics being applied to quantify what is going on across the platform, and the ten layers:

  • One - Unique visitors, page views, RSS subscribers, blog comments, tweets, GitHub follows, forks, and likes.
  • Two - New developers and new applications.
  • Three - API calls on sandbox API resources.
  • Four - New certified developers and applications.
  • Five - API calls in the production API resources.
  • Six - GET, POST, PUT, DELETE on different types of resources.
  • Seven - Support requests, communication, and other new resources.
  • Eight - Number of developers and applications audited.
  • Nine - Number of new and existing developers and applications showcased.
  • Ten - Number of end-users, sessions, page views, and other activity.

Providing a stack of metrics you can use to understand how well you are doing within each layer, understanding not just the metrics for a single area of your operations, but how well you are doing at building momentum, and increasing value generation. I hesitate to call this a sales funnel, because sales isn’t my jam. It is also because I do not see APIs as something you always sell–sometimes you want people contributing data and content into a platform, and not always just consuming resources. A well balanced API platform is generating value, not just selling API calls.

I am not entirely happy with this API value generation funnel outline and diagram, but it is a good start, and gets me closer to what I'm seeing in my head. I'll let it simmer for a few weeks, engage in some conversations with folks, and then take another pass at refining it. Along the way I'll think about how I would apply it to my own API platform, and actually take some actions in each area, and begin fleshing out my own funnel. I'm also going to be working on a version of it with the CMO at Streamdata.io, and a handful of other groups I'm working with on their API strategy. The more input I can get from a variety of API platforms, the more refined I can make this outline and visual of my suggested API value generation funnel.


Understanding Where Folks Are Coming From When They Say API Management Is Irrelevant

I am always fascinated when I get pushback from people about API management, the authentication, service composition, logging, analysis, and billing layer on the world of APIs. I seem to be finding more people who are skeptical that it is even necessary anymore, and that it is a relic of the past. When I first started coming across the people making these claims earlier this year I was confused, but as I've pondered the subject more, I'm convinced their position is more about the world of venture capital, and investment in APIs, than it is about APIs.

People who feel like you do not need to measure the value being exchanged at the API layer aren't considering the technology or business of delivering APIs. They are simply focused on the investment cycles that are occurring, and see API management as something that has been done, is baked into the cloud, and really isn't central to API-driven businesses. They perceive the age of API management as being over, it is something the cloud giants are doing now, thus it isn't needed. I feel like this is a symptom of tech startup culture being so closely aligned with investment cycles, and the road map being about investment size and opportunity, and rarely the actual delivery of the thing that brings value to companies, organizations, institutions, and government agencies.

I feel this perception is primarily rooted in the influence of investors, but it is also based upon a limited understanding of API management, and seeing APIs as being about delivering public APIs, maybe complementing a SaaS offering, with free, pro, and enterprise tiers of access. When in reality API management is about measuring, quantifying, reporting upon, and in some cases billing for the value that is exchanged at the system integration, web, mobile, device, and network application levels. However, to think API operators shouldn't be measuring, quantifying, reporting, and generating revenue from the digital bits being exchanged behind ALL applications just demonstrates a narrow view of the landscape.

It took me a few months to be able to see the evolution of API management from 2006 to 2016 through the eyes of an investment minded individual. Once the last round of consolidation occurred, Apigee IPO'd, and API management became baked into Amazon, Google, and Azure, it fell off the radar for these folks. It's just not a thing anymore. This is just one example of how investment influences the startup road map, as well as the type of thinking that goes on under investor influence, painting an entirely different picture of the landscape than what I see going on. Helping me understand more about where this narrative originates, and why it gets picked up and perpetuated within certain circles.

To counter this view of the landscape, from 2006 to 2016 I saw a very different picture. I didn’t just see the evolution of Mashery, Apigee, and 3Scale as acquisition targets, and cloud consolidation. I also saw the awareness that API management brings to the average API provider. Providing visibility into the pipes behind the web, mobile, device, and network applications we are depending on to do business. I’m seeing municipal, state, and federal government agencies waking up to the value of the data, content, and algorithms they possess, and the potential for generating much needed revenue off commercial access to these resources. I’m working with large enterprise groups to manage their APIs using 3Scale, Apigee, Kong, Tyk, Mulesoft, Axway, and AWS API Gateway solutions.

Do not worry. Authenticating, measuring, logging, reporting, and billing against the value flowing through our API pipes isn’t going anywhere. Yes it is baked into the cloud. After a decade of evolution, it definitely isn’t the early days of investing in API startups. But, API management is still a cornerstone of the API life cycle. I’d say that API definitions, design, and deployment are beginning to take some of the spotlight, but authentication, service composition, logging, metrics, analysis, and billing will remain an essential ingredient when it comes to delivering APIs of any shape or size. If you study this layer of the API economy long enough, you will even see some emerging investment opportunities at the API management layer, but you have to be looking through the right lens, otherwise you might miss some important details.


We Need You API Developers Until We Have Grown To A Certain Size

I'm watching several fronts along the API landscape evolve right now, with large API providers shifting, shutting down, and changing how they do business with their APIs–now that they don't need their ecosystem of developers as much. It is easy to point the finger at Facebook, Twitter, Google, and the other giants, but it really is a wider systemic, business of APIs illness that will continue to negatively impact the API universe. While this behavior is very prominent with the leading API providers right now, we'll see various waves of its influence on the tone of the entire API sector further on down the road.

How API providers treat their consumers varies from industry to industry, and is different depending on the types of resources being made available. However, the issue of API providers treating their developers differently when they are just getting started versus once they grow to a certain size, is something that will continue to plague all areas of the API space. Companies change as they mature, and their priorities will no doubt evolve, but almost all will feel compelled to exploit developers early on so that they can grow their numbers–taking advantage of the goodwill and curiosity of the developer personality.

The polarization of the API management layer is difficult to predict from the outside. I want to help new API providers out early on, but after eight years of doing this, and seeing many API providers and service providers evolve, and go away–I am left very skeptical of EVERYONE. I think many developers in the API space are feeling the same, and are wary of kicking the tires on new APIs, unless they have a specific project and purpose. I think the days of developers working for free within an API ecosystem are over. It is a concept that will still exist in some form, but with each wave of new API startups taking advantage of this reality, then tightening things down later on down the road, developers will eventually change their behavior in response.

The lesson is that if an API doesn't have a clear business model early on–steer clear of their services. We can't fault Twitter for working to monetize their platform now, and make their investors happy. We just should have seen it in the cards back in 2008. We can't fault Facebook for working to please their shareholders, and protect their warehouse of inventory (Facebook users) from malicious ecosystem players, we just should have seen this coming back in 2008. We can't fault Google Maps for raising the prices on the digital assets developed by us giving them our data for the last decade, we should have known this would happen back in 2006. The business and politics of APIs are less straightforward than the technology of APIs, and as we all know, the technology is a mess.

I will keep calling out the offenses of API providers when I feel strongly enough, but I've ranted about Facebook, Twitter, and Google a lot in the past. I feel like we should be skeptical of ALL API providers at this point, and assume the worst about them all–new or old. Expect them to change the prices down the road. Expect them to turn off the most valuable resources once we've helped them grow and mature into a prized digital asset. This is how some companies will continue to think they can make their investments grow, and all of us as API providers and consumers need to remain skeptical about our role in this process. Always making sure we remember that most API providers will no longer need us developers once they have grown to a certain size.


The Redirect URL To Confirm Selling Your API In AWS Marketplace Provides Us With A Positive Template

I am setting up different APIs using the AWS API Gateway and then publishing them to the AWS Marketplace, as part of my work with Streamdata.io. I’m getting a feel for what the process is all about, and how small I can distill an API product to be, as part of the AWS Marketplace process. My goal is to be able to quickly define APIs using OpenAPI, then publish them to AWS API Gateway, and leverage the gateway to help me manage the entire business of the service from signup to discovery.

As I was adding one of my first couple of APIs to the AWS Marketplace, I found the instructions regarding the redirect URL for each API to be a good template. Each individual API service I'm offering will have its own subscription confirmation URL with the AWS Marketplace, with the relevant variables present that I will need to scale the technical and business sides of delivering my APIs:

  • AWS Marketplace Confirmation Redirect URL: https://YOUR_DEVELOPER_PORTAL_API_ID.execute-api.[REGION].amazonaws.com/prod/marketplace-confirm/[USAGE_PLAN_ID]

This URL is what my retail customers will be given after they click to subscribe to one of my APIs. Each API has its specific portal (present in the URL), as well as being associated with a specific API usage plan within AWS API Gateway. Also notice that there is a variable for region, allowing me to deliver services by region, and scale up the technical side of delivering the various APIs I'm deploying–another important consideration when delivering reliable and performant services.
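To make the template concrete, here is a small Go sketch that fills in the redirect URL from the three variables; the values used are placeholders, only the URL structure comes from the template above.

```go
package main

import "fmt"

// confirmURL assembles the AWS Marketplace confirmation redirect URL from the
// developer portal API ID, the AWS region, and the usage plan ID.
func confirmURL(portalAPIID, region, usagePlanID string) string {
	return fmt.Sprintf(
		"https://%s.execute-api.%s.amazonaws.com/prod/marketplace-confirm/%s",
		portalAPIID, region, usagePlanID,
	)
}

func main() {
	// Placeholder values for a hypothetical API, region, and usage plan.
	fmt.Println(confirmURL("a1b2c3d4e5", "us-east-1", "abc123"))
}
```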

Pointing out the URL for a signup process might seem like a small thing. However, because of AWS’s first mover advantage in the cloud, and their experience as an API pioneer, I feel like they are in a unique position to be defining the business layer of delivering APIs at scale in the cloud. The business opportunities available at the API management and marketplace layers within the AWS ecosystem are untapped, and represent the next generation of API management that is baked into the cloud. It’s an interesting place to be delivering and integrating with APIs, at this stage of the API economy.


The Service Level Agreement (SLA) Definition For The OpenAPI Specification

I’m currently learning more about SLA4OAI, an open source standard for describing SLA in APIs, which is based on the standards proposed by the OAI, adding an optional profile for defining SLA (Service Level Agreements) for APIs. “This SLA definition in a neutral vendor flavor will allow to foster innovation in the area where APIs expose and documents its SLA, API Management tools can import and measure such key metrics and composed SLAs for composed services aggregated way in a standard way.” Providing not just a needed standard for the API sector, but more importantly one that is built on top of an existing standard.

SLA4OAI provides an interesting way to define the SLA for any API, providing a set of objects that augment and can be paired with an OpenAPI definition using an x-sla vendor extension (I've included a rough sketch of what this can look like after the list):

  • Context - Holds the main information of the SLA context.
  • Infrastructure - Required. Provides information about tooling used for SLA storage, calculation, governance, etc.
  • Pricing - Global pricing data.
  • Metrics - A list of metrics to use in the context of the SLA.
  • Plans - A set of plans to define different service levels per plan.
  • Quotas - Global quotas, these are the default quotas, but they could be overridden by each plan later.
  • Rates - Global rates, these are the default rates, but they could be overridden by each plan later.
  • Guarantees - Global guarantees, these are the default guarantees, but they could be overridden by each plan later.
  • Configuration - Define the default configurations; each plan can override them later.
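To help visualize how these objects come together, here is a rough, illustrative YAML sketch of an SLA document paired with an OpenAPI definition via x-sla. The field names and values are approximations based on the object list above, not copied from the specification, so consult SLA4OAI for the canonical schema.

```yaml
# sla.yml - illustrative only; field names approximated from the objects above.
context:
  id: example-api-sla
  api: ./example-openapi.yml        # the OpenAPI definition this SLA augments
  provider: ExampleProvider
infrastructure:
  supervisor: https://supervisor.example.com/v1/
  monitor: https://monitor.example.com/v1/
metrics:
  requests:
    type: integer
    description: Number of API requests made by a consumer.
pricing:
  currency: USD
plans:
  free:
    pricing:
      cost: 0
    quotas:
      /messages:
        get:
          requests:
            - max: 100
              period: daily
  pro:
    pricing:
      cost: 50
    quotas:
      /messages:
        get:
          requests:
            - max: 10000
              period: daily

# In the OpenAPI definition, the SLA would be referenced with the x-sla extension:
# info:
#   title: Example API
#   version: 1.0.0
#   x-sla: ./sla.yml
```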

These objects provide all the details you will need to quantify the SLA for any OpenAPI defined API. In order to validate each of the SLAs, a separate Basic SLA Management Service is provided to implement the services that control, manage, report on, and track SLAs. Providing the reporting output you will need to understand whether or not each individual API is meeting its SLA. Providing a universal SLA format that can be used to create SLA templates, applied as individual API SLAs, and then leveraging a common schema for reporting on SLA monitoring, which can actually be used in conjunction with the API management layer of your API operations.

I'm still getting familiar with the specification, but I'm impressed with what I've seen so far. There is a lot of detail available in there, and it provides all the context that will be needed to quantify, measure, and report upon API SLAs. My only critique at this point is that I feel the pricing, metrics, plans, and quotas elements should be broken out into a separate specification so that they can be used outside the context of just SLA management, and as part of the wider API management strategy. There are plenty of scenarios where you will want to be using these elements, but not in the context of SLA enforcement (i.e. driving pricing and plan pages, billing and invoicing, and API discovery). Other than that, I'm pretty impressed with the work present in the specification.

It makes me happy to see sister specifications emerge like this, rather than being baked directly into the OpenAPI specification. The way they've referenced the SLA from the OpenAPI definition, and the OpenAPI from the SLA definition, is how these things should occur, in my opinion. It has been something I've done for years with the APIs.json format, having seen a future where there are many sister or even competing specifications all working in unison. Which is why I feel the pricing and plan elements should be decoupled from the SLA object here. However, I think this is a great start to something that we are going to need if we expect this whole API thing to actually work at scale, and make sure the technology of APIs is in sync with the business of APIs. Without it, there will always be a gulf between the technology and business units, much like we've seen historically between IT and business groups across enterprise organizations today.


Ping Identity Acquires ElasticBeam To Establish New API Security Solution

You don’t usually find me writing about API acquisitions unless I have a relationship with the company, or there are other interesting aspects of the acquisition that makes it noteworthy. This acquisition of Elastic Beam by Ping Identity has a little of both for me, as I’ve been working with the Elastic Beam team for over a year now, and I’ve been interested in what Ping Identity is up to because of some research I am doing around open banking in the UK, and the concept of an industry level API identity and access management, as well as API management layer. All of which makes for an interesting enough mix for me to want to quantify here on the blog and load up in my brain, and share with my readers.

From the press release, "Ping Identity, the leader in Identity Defined Security, today announced the acquisition of API cybersecurity provider Elastic Beam and the launch of PingIntelligence for APIs." Which I think reflects some of the evolution of API security I've been seeing in the space, moving beyond just API management, and also being about security from the outside-in. The newly combined security solution, PingIntelligence for APIs, focuses in on automated API discovery, threat detection & blocking, API deception & honeypot, traffic visibility & reporting, and self-learning–merging the IAM, API management, and API security realms for me, into a single approach to addressing security that is focused on the world of APIs.

While I find this an interesting intersection for the world of APIs in general, where I’m really intrigued by the potential is when it comes to the pioneering open banking API efforts coming out of the UK, and the role Ping Identity has played. “Ping’s IAM solution suite, the Ping Identity Platform, will provide the hub for Open Banking, where all UK banks and financial services organizations, and third-party providers (TPPs) wanting to participate in the open banking ecosystem, will need to go through an enrollment and verification process before becoming trusted identities stored in a central Ping repository.” Which provides an industry level API management blueprint I think is worth tuning into.

Back in March, I wrote about the potential of the identity, access management, API management, and directory for open banking in the UK to be a blueprint for an industry level approach to securing APIs in an observable way. Where all the actors in an API ecosystem have to be registered and accessible in a transparent way through the neutral 3rd party directory, before they can provide or access APIs. In this case it is banking APIs, but the model could apply to any regulated industry, including the world of social media, which I wrote about a couple months back as well after the Cambridge Analytica / Facebook shitshow. Bringing API management and security out into the open, making it more observable and accountable, which is the way it should be in my opinion–otherwise we are going to keep seeing the same games being played that we've seen with high profile breaches like Equifax, and API management lapses like we see at Facebook.

This is why I find the Ping Identity acquisition of ElasticBeam interesting and noteworthy. The acquisition reflects the evolving world of API security, but also has real world applications as part of important models for how we need to be conducting API operations at scale. ElasticBeam is a partner of mine, and I've been talking with them and the Ping Identity team since the acquisition. I'll keep talking with them about their road map, and I'll keep understanding how they apply to the world of API management and security. I feel the acquisition reflects the movement in API security I've been wanting to see for a while, moving us beyond just authentication and API management, looking at API security through an external lens, exploring the potential of machine learning, but also not leaving everything we've learned so far behind.


Working To Keep Programming At The Edges Of The API Conversation

I'm fascinated by the dominating power of programming languages. There are many ideological forces at play in the technology sector, but the dogma that exists within each programming language community continues to amaze me. The potential absence of programming language dogma within the world of APIs is one of the reasons I feel it has been successful, but alas, other forms of dogma tend to creep in around specific API approaches and philosophies, making API evangelism and adoption always a challenge.

The absence of programming languages in the API design, management, and testing discussion is why those disciplines have been so successful. People in these disciplines have been focused on the language agnostic aspects of just doing business with APIs. It is also one of the reasons the API deployment conversation is still so fragmented, with so many ways of getting things done. When it comes to API deployment, everyone likes to bring their programming language beliefs to the table, and let them affect how we actually deliver APIs, which is, in my opinion, why API gateways have the potential to make a comeback, and even excel when it comes to playing the role of API intermediary, proxy, and gateway.

Programming language dogma is why many groups have so much trouble going API first. They see APIs as code, and have trouble transcending the constraints of their development environment. I've seen many web or HTTP APIs called Java APIs or Python APIs, or otherwise reflect a specific language style. It is hard for developers to transcend their primary programming language, and learn multiple languages, or think in a language agnostic way. It is not easy for us to think outside of our boxes, and consider external views, and empathize with people who operate within other programming or platform dimensions. It is just easier to see the world through our lenses, making the world of APIs either illogical, or something we need to bend to our way of doing things.

I'm in the process of evolving from my PHP and Node.js realm to a Go reality. I'm not abandoning the PHP world, because many of my government and institutional clients still operate in this world, and I'm using Node.js for a lot of the serverless API stuff I'm doing. However I can't ignore the Go buzz I keep stumbling upon. I also feel like it is time for a paradigm shift, forcing me out of my comfort zone and pushing me to think in a new language. This is something I like to do every five years, shifting my reality, keeping me on my toes, and forcing me to not get too comfortable. I find that this keeps me humble and thinking across programming languages, which is something that helps me realize the power of APIs, and how they transcend programming languages, and make data, content, algorithms, and other resources more accessible via the web.


People Still Think APIs Are About Giving Away Your Data For Free

After eight years of educating people about sensible API security and management, I’m always amazed at how many people I come across who still think public web APIs are about giving away access to your data, content, and algorithms for free. I regularly come across very smart people who say they’d be doing APIs, but they depend on revenue from selling their data and content, and wouldn’t benefit from just putting it online for everyone to download for free.

I wonder when we started thinking the web was about giving everything away for free? It is something I'm going to have to investigate a little more. For me, it shows how much education we still have ahead of us when it comes to informing people about what APIs are, and how to properly manage them. Which is a problem, when many of the companies I'm talking to are most likely doing APIs to drive internal systems, and public mobile applications. They are either unaware of the APIs that already exist across their organization, or think that because they don't have a public developer portal showcasing their APIs, they are much more private and secure than if they were openly offering them to partners and the public.

Web API management has been around for over a decade now. Requiring ALL developers to authenticate when accessing any APIs, and the ability to put APIs into different access tiers, limit the rate of consumption, while logging and billing for all API consumption, isn't anything new. Amazon has been extremely public about their AWS efforts, and the cloud isn't a secret. The fact that smart business leaders see all of this and do not see that APIs are driving it all represents a disconnect amongst business leadership. It is something I'm going to be testing out a little bit more to see what levels of knowledge exist across many Fortune 1000 companies, helping paint a picture of how they view the API landscape, and help me quantify their API literacy.
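For readers who haven't worked behind an API management layer, here is a minimal sketch in Go of the kind of per-key authentication and rate limiting it applies before logging and billing ever enter the picture. The header name, limits, and in-memory storage are all simplified placeholders, not any particular vendor's implementation.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

// limiter tracks calls per API key within the current window.
type limiter struct {
	mu     sync.Mutex
	counts map[string]int
	limit  int
}

func (l *limiter) allow(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.counts[key] >= l.limit {
		return false
	}
	l.counts[key]++
	return true
}

// reset clears all counters at the start of every window.
func (l *limiter) reset(every time.Duration) {
	for range time.Tick(every) {
		l.mu.Lock()
		l.counts = map[string]int{}
		l.mu.Unlock()
	}
}

func main() {
	l := &limiter{counts: map[string]int{}, limit: 100} // 100 calls per key, per minute
	go l.reset(time.Minute)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		key := r.Header.Get("apiKey") // hypothetical header name
		if key == "" {
			http.Error(w, "authentication required", http.StatusUnauthorized)
			return
		}
		if !l.allow(key) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		// This is where usage would be logged and billed, and the managed
		// API resource actually served.
		fmt.Fprintln(w, "ok")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```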

Educating business leaders about APIs has been a part of my mission since I started API Evangelist in 2010. It is something that will continue to be a focus of mine. This lack of awareness is why we end up with damaging incidents like the Equifax breach, and the Cambridge Analytica / Facebook scandal. It's how we end up with so many trolls on Twitter, and out of balance API ecosystems across federal, state, and municipal governments. It is a problem that we need to address in the industry, and work to help educate business leaders around common patterns for securing and managing our API resources. I think this process always begins with education and API literacy, but it is a symptom of the disconnect around storytelling about public vs private APIs, when in reality there are just APIs that are secured and managed properly, or not.


Making Connections At The API Management Layer

I've been evaluating API management providers, and this important stop along the API lifecycle which they serve, for eight years now. It is a space that I'm very familiar with, and I have enjoyed watching it mature, evolve, and become something that is more standardized, and lately more commoditized. I've enjoyed watching the old guard (3Scale, Apigee, and Mashery) be acquired, and API management be baked into the cloud with AWS, Azure, and Google. I've also had fun learning about Kong, Tyk, and the next generation of API management providers as they grow and evolve, as well as some of the older players like Axway as they work to retool so that they can compete and even lead the charge in the current environment.

I am renewing efforts to study what each of the API management solutions provide, pushing forward my ongoing API management research, understanding what the current capacity of the active providers is, and how they are potentially pushing forward the conversation. One of the things I'm extremely interested in learning more about is the connector, plugin, and extensibility opportunities that exist with each solution. Functionality that allows other 3rd party API service providers to inject their valuable services into the management layer of APIs, bringing other stops along the API lifecycle into the management layer, allowing API providers to do more than just what their API management solution delivers. Turning the API management layer into much more than just authentication, service plan management, logging, analytics, and billing.

Over the last year I've been working with API security provider ElasticBeam (https://www.elasticbeam.com/) to help make sense of what is possible at the API management layer when it comes to securing our APIs. ElasticBeam can analyze the surface area of an API, as well as the DNS, web, API management, web server, and database logs for potential threats, and apply their machine learning models in real time. Without direct access at the API management layer, ElasticBeam is still valuable, but cannot respond in real time to threats by shutting down keys, blocking requests, and countering other attacks being leveraged against our API infrastructure. Sure, you can still respond after the fact based upon what ElasticBeam learns from scanning all of your logs, but without being able to connect directly into your API management layer, the effectiveness of their security solution is significantly diminished.

Complementing, but also contrasting, ElasticBeam, I'm also working with Streamdata.io (http://streamdata.io) to help understand how they can be injected at the API management layer, adding an event-driven architectural layer to any existing API. The first part of this would involve turning high volume APIs into real time streams using Server-Sent Events (SSE). With future advancements focused on topical streaming, webhooks, and WebSub enhancements to transform simple request and response APIs into event-driven streams of information that only push what has changed to subscribers. Like ElasticBeam, Streamdata.io would benefit from being directly baked into the API management layer as a connector or plugin, augmenting the API management layer with a next generation event-driven layer that would complement what any API management solution brings to the table. Without an extensible connector or plugin layer at the API management layer you can't inject additional services like security with ElasticBeam, or event-driven architecture like Streamdata.io.
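To give a feel for how lightweight that event-driven layer can be for consumers, here is a minimal Go sketch of reading a Server-Sent Events (SSE) stream like the ones Streamdata.io produces. The endpoint URL is hypothetical, and a production client would also handle reconnects, event IDs, and incremental patches.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical SSE endpoint sitting in front of an existing request/response API.
	resp, err := http.Get("https://streaming.example.com/v1/orders")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// SSE is just a long-lived HTTP response; each event arrives as one or
	// more "data:" lines, separated from the next event by a blank line.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data:") {
			payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
			fmt.Println("event payload:", payload)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```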
I'm going to be looking for this type of extensibility as I profile the features of all of the active API management providers. I'm looking to understand the core features each API management provider brings to the table, but I'm also looking to understand how modern these API management solutions are when it comes to seamlessly working with other stops along the API lifecycle, and specifically how these other stops can be serviced by other 3rd party providers. Similar to my regular rants about API service providers always having APIs, you are going to hear me rant more about API service providers needing to have connector, plugin, and other extensibility features. API management service providers can put their APIs to work driving this connector and plugin infrastructure, which should allow for more seamless interactions, and benefits for their customers that are brought to the table by their most trusted partners.


Measuring Value Exchange Around Government Data Using API Management

I've written about how the startup community has driven the value exchange that occurs at the API management layer down a two lane road of API monetization and plans. To generate the value that investors have been looking for, we have ended up with a free, pro, enterprise approach to measuring the value exchanged with API integration, when in reality there is so much more going on here. Something that really becomes evident when you begin evaluating the API conversations going on across government at all levels.

In conversation after conversation I have with government API folks, I hear that they don’t need API management reporting, analysis, and billing features. There is a perception that government isn’t selling access to APIs, so API management measurement isn’t really needed. This misses a huge opportunity to be measuring, analyzing, and reporting upon API usage, and it requires a significant amount of re-education on my part to help API providers within government understand how they need to be measuring value exchange at the API management level.

Honestly, I’d love to see government agencies actually invoicing API consumers at all levels for their usage. Even if the invoices aren’t in dollars. Measuring API usage, reporting, quantifying, and sending an invoice for usage each month would help bring awareness to how government digital resources are being put to use (or not). If all data, content, and algorithms in use across federal, state, and municipal government were available via APIs, then measured, rate limited, and reported upon across all internal, partner, and public consumers–the awareness around the value of government digital assets would increase significantly.
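
To show what I mean, here is a minimal sketch of what a monthly usage invoice could look like when generated from API management logs, even when no money changes hands–the log format, consumers, and per-call rates here are all hypothetical.

```python
# A rough sketch of generating a monthly "invoice" from API management logs,
# even when no money actually changes hands. Log format, consumers, and
# rates are hypothetical.
from collections import defaultdict

# Hypothetical per-call rates used purely to express value, not to bill.
RATES = {"/facilities": 0.001, "/permits": 0.002, "/inspections": 0.005}


def build_invoices(log_entries):
    """log_entries: iterable of dicts like {"consumer": "...", "path": "..."}."""
    usage = defaultdict(lambda: defaultdict(int))
    for entry in log_entries:
        usage[entry["consumer"]][entry["path"]] += 1

    invoices = {}
    for consumer, paths in usage.items():
        lines = [
            {"path": p, "calls": c, "value": round(c * RATES.get(p, 0.0), 2)}
            for p, c in sorted(paths.items())
        ]
        invoices[consumer] = {
            "lines": lines,
            "total_value": round(sum(l["value"] for l in lines), 2),
        }
    return invoices


if __name__ == "__main__":
    sample = [
        {"consumer": "city-dashboard", "path": "/permits"},
        {"consumer": "city-dashboard", "path": "/permits"},
        {"consumer": "research-app", "path": "/facilities"},
    ]
    print(build_invoices(sample))
```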

I’m not saying we should charge for all access to government data. I’m saying we should be measuring and reporting upon all access to government data. The APIs, and a modern API management layer is how you do this. We have the ability to measure the value exchanged and generated around government digital resources, and we shouldn’t let misconceptions around how the private sector measures and generates revenue at the API layer interfere with government agencies benefitting from this approach. If you are publishing an API at a government agency, I’d love to learn more about how you are managing access to your APIs, and how you are reporting upon the reading and writing of data across applications, and learn more about how you are sharing and communicating around these consumption reports.


API Gateway As Just A Node And Moving Beyond Backend and Frontend

The more I study where the API management, gateway, and proxy layer is headed, the less I’m seeing a front-end or a backend–I’m seeing just a node. A node that can connect to existing databases, or what is traditionally considered a backend system, but can also just as easily proxy and be a gateway to any existing API. A lot has been talked about when it comes to API gateways deploying APIs from an existing database. There has also been considerable discussion around proxying existing internal APIs or web services, so that we can deploy newer APIs. However, I don’t think there has been nearly as much discussion around proxying other 3rd party public APIs–which flips the backend and frontend discussion on its head for me.

After profiling the connector and plugin marketplaces for a handful of the leading API management providers, I am seeing a number of API deployment opportunities for Twilio, Twitter, SendGrid, etc. Allowing API providers to publish API facades for commonly used public APIs, obfuscating away the 3rd party provider, and making the API your own. Allowing providers to build a stack of APIs from existing backend systems, private APIs and services, as well as 3rd party public APIs. Shifting gateways and proxies from being API deployment and management gatekeepers for backend systems, to being nodes of connectivity for any system, service, and API that you can get your hands on. Changing how we think about designing, deploying, and managing APIs at scale.
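
Here is a minimal sketch of what one of these facades might look like in practice–a gateway node publishing its own endpoint in front of a hypothetical 3rd party messaging API, keeping the upstream provider and credentials out of view. The upstream URL, payload fields, and environment variable are all assumptions for illustration.

```python
# A rough sketch of a gateway "node" publishing a facade over a third party
# API, so consumers see your branded endpoint rather than the upstream
# provider. The upstream URL and credentials are hypothetical.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://api.thirdparty.example.com/v1/messages"  # hypothetical


@app.route("/messages", methods=["POST"])
def send_message():
    # Translate our simplified facade payload into the upstream's format,
    # keeping the third party's API key server-side and out of sight.
    body = request.get_json(force=True)
    upstream_payload = {"to": body["recipient"], "text": body["message"]}
    resp = requests.post(
        UPSTREAM,
        json=upstream_payload,
        headers={"Authorization": f"Bearer {os.environ['UPSTREAM_API_KEY']}"},
        timeout=10,
    )
    # Re-shape the upstream response so it looks like part of our own stack.
    return jsonify({"id": resp.json().get("id"), "status": resp.status_code})
```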

I feel like this conversation is why API deployment is such a hard thing to discuss. It can mean so many different things, and be accomplished in so many ways. It can be driven by a database, or strictly using code, or be just taking an existing API and turning it into something new. I don’t think it is something that is well understood amongst developer circles, let alone in the business world. An API gateway can be about integration just as much as it can be about deployment. It can be both simultaneously. Further complicating what APIs are all about, but also making the API gateway a more powerful concept, continuing to renew this relic of our web services past into something that will accommodate the future.

What I really like about this notion is that it ensures we will all be API providers as well as consumers. The one-sided nature of API providers in recent years has always frustrated me. It has allowed people to stay isolated within their organizations, and not see the world from the API consumer perspective. While API gateways as a node won’t bring everyone out of their bubble, it will force some API providers to experience more pain than they have historically. Understanding what it takes to not just provide APIs, but what it takes to do so in a more reliable, consistent, and friendly way. If we all feel the pain of our integrations and have to depend on each other, the chances we will show a little more empathy will increase.


API Management In Real-Time Instead Of After The Media Shitstorm

I have been studying API management for eight years now. I’ve spent a lot of time understanding the approach of leading API providers, and the services and tools put out there by API service providers from 3Scale to Tyk, and how the cloud service providers like AWS are baking API management into their clouds. API management isn’t the most exciting aspect of doing APIs, but I feel it is one of the most important elements of doing APIs, delivering on the business and politics of doing APIs, which can often make or break a platform and the applications that depend on it.

Employing a common API management solution, and having a solid plan in place, takes time and investment. To do it properly takes lots of regular refinement, investment, and work. It is something that will often seem unnecessary–having to review applications, get to know developers, and consider how they fit into the bigger picture. Making it something that can easily be pushed aside for other tasks on a regular basis–until it is too late. This is frequently the case when your API access isn’t properly aligned with your business model, and there are no direct financial strings attached to new API users, or a line distinguishing active from inactive users, let alone anything governing what they can do with the data, content, and algorithms that are made accessible via APIs.

The API management layer is where the disconnect between API providers and consumers occurs. It is also where the connection, and meaningful engagement, occurs when done right. In most cases API providers aren’t being malicious, they are just investing in the platform as their leadership has directed, and acting upon the stage their business model has set into motion. If your business model is advertising, then revenue is not directly connected to your API consumption, and you just turn on the API faucet to let in as many consumers as you can possibly attract. Your business model doesn’t require that you play gatekeeper, qualify, or get to know your API developers, or the applications they are developing. This is what we’ve seen with Facebook, Twitter, and other platforms who are experiencing challenges at the API management layer lately, due to a lack of management over the last 5+ years.

There was no incentive for Facebook or Twitter to review applications, audit their consumption, or get to know their usage patterns. The business model is an indirect one, so investment at the API management layer is not a priority. They only respond to situations once it begins to hurt their image, and potentially rises up to begin to hurt their bottom line. The problem with this type of behavior is that other API providers see this and think it is a problem with doing APIs, and do not see it as a problem with not doing API management–that having a business model which is sensibly connected to your API usage, and a team that is actively managing and engaging with your API consumers, is how you deal with these challenges. If you manage your API properly, you are dealing with negative situations early on, as opposed to waiting until there is a media shitstorm before you start reining in API consumption on your platform.

Sensible API management is something I’ve been writing about for eight years. It is something I will continue to write about for many more years, helping companies understand the importance of doing it correctly, and being proactive rather than reactive. I’ll also be regularly writing about the subject to help API consumers understand the dangers of using platforms that do not have a clear business model, as it is usually a sign of this API management disconnect I’m talking about. It is something that will ultimately cause problems down the road, and contribute to platform instability, and APIs potentially being limited or shuttered as part of these reactionary responses we are seeing from Facebook and Twitter currently. I know that my views of API management are out of sync with popular notions of how you incentivize exponential growth on a platform, but my views are in alignment with healthy API ecosystems, successful API consumers, and reliable API providers who are around for the long term.


Facebook And Twitter Only Now Beginning To Police Their API Applications

I’ve been reading about all the work Facebook and Twitter have been doing over the last couple of weeks to begin asserting more control over their API applications. I’m not talking about the deprecation of APIs, that is a separate post. I’m focusing on them reviewing applications that have access to their API, and shutting off access to the ones who aren’t adding value to the platform or are violating the terms of service. Doing the hard work to maintain a level of quality on the platform, which is something they should have been doing all along.

I don’t want to diminish the importance of the work they are doing, but it really is something that should have been done along the way–not just when something goes wrong. This kind of behavior really sets the wrong tone across the API sector, and people tend to focus on the thing that went wrong, rather than the best practices of what you should be doing to maintain quality across API operations. Other API providers will hesitate to launch public APIs because they won’t want to experience the same repercussions as Facebook and Twitter have, completely overlooking the fact that you can have public APIs and maintain control along the way. Setting the wrong precedent for API providers to emulate, and damaging the overall reputation of operating public APIs.

Facebook and Twitter have both had the tools all along to police the applications using their APIs. The problem is that the incentives to do so, and to prioritize these efforts, aren’t there, due to an imbalance with their business model, and a lack of diversity in their leadership. When you have a bunch of white dudes with a libertarian ethos pushing a company towards profitability with an advertising-driven business model, investing in quality control at the API management layer just isn’t a priority. You want as many applications, users, and activity as you possibly can, and when you don’t see the abuse, harassment, and other illnesses, there really is no problem from your vantage point. That is, until you get called out in the press, or are forced to testify in front of congress. The reason us white dudes get away with this is that there are no repercussions, we just get to ignore it until it becomes a problem, apologize, perform a little bit to show we care, and wait until the next problem occurs.

This is the wrong API model to put out there. API providers need to see the benefits of properly reviewing applications that want access to their APIs, and the value of setting a higher bar for how applications use the API. There should be regular reviews of active applications, and audits of how they are accessing, storing, and putting resources to work. This isn’t easy work, or inexpensive to do properly. It isn’t something you can put off until you get in trouble. It is something that should be done from the beginning, and conducted regularly, as part of the operations of a well funded team. You can have public APIs for a platform, and avoid privacy, security, and other shit-shows. If you need an example of doing it well, look at Slack, who has a public API that is successful, even with a high level of bot automation, but somehow manages to stay out of the spotlight for doing dumb things. It is because their API management practices are in better alignment with their business model–the incentives are there.

For the next 3-5 years I’m going to have to hear from companies who aren’t doing public APIs, because they don’t want to make the same mistakes as Facebook and Twitter. All because Facebook and Twitter have been able to get away with such bad behavior for so long, avoid doing the hard work of managing their API platforms, and receive so much bad press. All in the name of growth and profits at all costs. Now, I’m going to have to write a post every six months showcasing Facebook and Twitter as pioneers for how NOT to run your platforms, explaining the importance of healthy API management practices, and investing in your API teams so they have the resources to do it properly. I’d rather have positive role models to showcase than poorly behaved ones that force me to work overtime to change perceptions and alter API providers’ behavior. As an API community, let’s learn from what has happened and invest properly in our API management layers, properly screen and get to know who is building applications on our resources, and regularly tune into and audit their behavior. Yes, it takes more investment, time, and resources, but in the end we’ll all be better off for it.


Nest Branding And Marketing Guidelines

I’m always looking out for examples of API providers who have invested energy into formalizing processes around the business and politics of API operations. I’m hoping to gather a variety of approaches that I can aggregate into a single blueprint to use in my API storytelling and consulting. The more I can help API providers standardize what they do, the better off the API sector will be, so I’m always investing in the work that API providers should be doing, but that doesn’t always get prioritized.

The other day, while profiling the way that Nest uses Server-Sent Events (SSE) to stream activity from thermostats, cameras, and the other devices they provide, I stumbled across their branding policies. They provide a pretty nice set of guidance for Nest developers in respect to the platform’s brand, something you don’t see with many other API providers. I always say that branding is the biggest concern for new API providers, but also the one that is almost never addressed by API providers who are in operation–which doesn’t make a whole lot of sense to me. If I had a major corporate brand, I’d work to protect it, and help developers understand what is important.

The Nest marketing program is intended to qualify applications, and then allow you to use the “Works with Nest” branding in your marketing and social media. To get approved you have to submit your product or service for review, and the review process verifies that you are in compliance with all of their branding policies.

To apply for the program you have to email them with the following details regarding the marketing efforts for your product or service where you will be using the “Works with Nest” branding:

  • Description of your marketing program
  • Description of your intended audience
  • Planned communication (list all that apply): Print, Radio, TV, Digital Advertising, OOH, Event, Email, Website, Blog Post, Email, Social Post, Online Video, PR, Sales, Packaging, Spec Sheet, Direct Mail, App Store Submission, or other (please specify)
  • High resolution assets for the planned communication (please provide URL for file download); images should be at least 1000px wide
  • Planned geography (list all that apply): Global, US, Canada, UK, or other (please specify)
  • Estimated reach: 1k - 50k, 50k- 100k, 100k - 500k, or 500k+
  • Contact Information: First name, last name, email, company, product name, and phone number

Once submitted, Nest says they’ll provide feedback or approval of your request within 5 business days, and if all is well, they’ll approve your marketing plan for that Works with Nest product. If they find any submission is not in compliance with their branding policies, they’ll ask you to make corrections to your marketing, so you can submit again. I don’t have too many examples of marketing and branding submission processes as part of my API branding research. I have the user interface guide, and trademark policy, as building blocks, but the user experience guide, and the details of the submission process, are all new entries to my research.

I feel like API providers should be able to defend the use of their brand. I do not feel API providers can penalize and cut off API consumers’ access unless there are clear guidelines and expectations presented regarding what is healthy, and unhealthy, behavior. If you are concerned about how your API consumers reflect your brand, then take the time to put together a program like Nest has. You can look at my wider API branding research if you need other examples, or I’m happy to dive into the subject more as part of a consulting arrangement, and help craft an API branding strategy for you. Just let me know.

Disclosure: I do not “Work with Nest” – I am just showcasing the process. ;-)


Tyk Is Conducting API Surgery Meetups

I was having one of my regular calls with the Tyk team as part of our partnership, discussing what they are up to these days. I’m always looking to understand their road map, and see where I can discover any stories to tell about what they are up to. A part of their strategy to build awareness around their API management solution that I found interesting was the API Surgery event they held in Singapore last month, where they brought together API providers, developers, and architects to learn more about how Tyk can help them out in their operations.

API surgery seems like an interesting evolution of the Meetup formula. They have a lot of the same elements as a regular Meetup, like making sure there is pizza and drinks, but instead of presentations, they ask folks to bring their APIs along, and they walk them through setting up Tyk, delivering an API management layer for their API operations. If they don’t have their own API, no problem–Tyk makes sure there are test APIs for them to use while learning about how things work. Helping them understand how to deliver API developer onboarding, documentation, authentication, rate limiting, monitoring, analytics, and the other features that Tyk delivers.

They had about 12 people show up to the event, with a handful of business users, as well as some student developers. They even got a couple of new clients from the event. It seems like a good way to not beat around the bush about what an API service provider is wanting from event attendees, and getting down to the business at hand, learning how to secure and manage your API. I think the Meetup format still works for API providers, and service providers looking to reach an audience, but I like hearing about evolutions in the concept, and doing things that might bring out a different type of audience, and cut out some of the same tactics we’ve seen play out over the last decade.

I could see Meetups like this working well at this scale. You don’t need to attract large audiences with this approach. You just need a handful of interested people, looking to learn about your solution, and understand how it solves a problem they have. Tyk doesn’t have to play games about why they are putting on the event, and people get the focus time with a single API service provider. Programming language meetups still make sense, but I think as the API sector continues to expand that API service provider, or even API provider focused gatherings can also make sense. I’m going to keep an eye on what Tyk is doing, and look for other examples of Meetups like this. It might reflect some positive changes out there on the landscape.

Disclosure: Tyk is an API Evangelist partner.


That Point Where API Session Management Becomes API Surveillance

I was talking to my friend’s TC2027 Computer and Information Security class at Tec de Monterrey via a Google hangout today, and one of the questions I got was around managing API sessions using JWT, spawned from a story about securing JWTs. A student was curious about managing sessions across API consumption, while addressing security concerns, making sure tokens aren’t abused, and that API consumption by 3rd parties who shouldn’t have access doesn’t go unnoticed.

I feel like there are two important, and often competing, interests occurring here. We want to secure our API resources, making sure data isn’t leaked, and prevent breaches. We want to make sure we know who is accessing resources, and develop a heightened awareness regarding who is accessing what, and how they are putting it to use. However, the more we march down the road of managing sessions, logging, analyzing, tracking, and securing our APIs, the more we are simultaneously ramping up the surveillance of our platforms, and of the web, mobile, network, and device clients who are putting our resources to use. Sure, we want to secure things, but we also want to think about the opportunity for abuse, even as we are working to manage abuse on our platforms.

To answer the question around how to track sessions across API operations, I recommended thinking about the identification layer, which includes JWT and OAuth, depending on the situation. After that you should be looking at other dimensions for identifying a session, like IP address, timestamps, user agent, and any other identifying characteristics. An app or user token is much more about identification than it is about actual security, and to truly identify a valid session you should have more than one dimension beyond that key–acknowledging valid sessions, as well as sessions in general. Identifying what healthy sessions look like, as well as unhealthy, or unique sessions that might be out of the realm of normal operations.

To accomplish all of this, I recommend implementing a modern API management solution, but also pulling in logging from all other layers, including DNS, web server, database, and any other system in the stack. To be able to truly identify healthy and unhealthy sessions you need visibility, and synchronicity, across all logging layers of the API stack–do the API management logs reflect the DNS and web server logs, and so on? This is where access tiers, rate limits, and overall consumption awareness really come in, along with having the right tools to lock things down, and freeze keys and tokens, as well as being able to identify what healthy API consumption looks like, providing a blueprint for what API sessions should, or shouldn’t, be occurring.
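
Here is a minimal sketch of what I am describing, using hypothetical log structures–identifying a session by more dimensions than just the token, and flagging requests seen at the API management layer that don't line up with what the web server logs saw.

```python
# A rough sketch of identifying an API "session" by more than its token,
# and checking that activity seen at the API management layer lines up with
# what the web server logs saw. Log structures here are hypothetical.
import hashlib


def session_fingerprint(token_sub, ip, user_agent):
    """Combine several dimensions, not just the key, to identify a session."""
    raw = f"{token_sub}|{ip}|{user_agent}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]


def flag_suspect_sessions(mgmt_log, web_log):
    """Each log: list of dicts with token_sub, ip, user_agent, request_id."""
    seen_in_web = {e["request_id"] for e in web_log}
    suspects = []
    for e in mgmt_log:
        fp = session_fingerprint(e["token_sub"], e["ip"], e["user_agent"])
        # Requests in the management layer with no matching web server entry
        # are worth a closer look (or a key freeze) before they become a breach.
        if e["request_id"] not in seen_in_web:
            suspects.append({"fingerprint": fp, "request_id": e["request_id"]})
    return suspects
```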

At this point in the conversation I also like to point out that we should stop and consider at what point all of this API authentication, security, logging, analysis, reporting, and session management becomes surveillance. Are we seeking API security because it is what we need, or just because it is what we do? I know we are defensive about our resources, and we should be going the distance to keep data private and secure, but at some point, by collecting more data and establishing more logging streams, we actually begin to work against ourselves. I’m not saying it isn’t worth it in some cases, I am just saying that we should be questioning our own motivations, and the potential for introducing more abuse, as we police, surveil, and secure our APIs from abuse.

As technologists, we aren’t always the best at stepping back from our work, and making sure we aren’t introducing new problems alongside our solutions. This is why I have my API surveillance research, alongside my API authentication, security, logging, and other management research. We tend to get excited about, and hyper focused on the tech for tech’s sake. The irony of this situation is that we can also introduce exploitation and abuse around our practices for addressing exploitation and abuse around our APIs. Let’s definitely keep having conversations around how we authenticate, secure, and log to make sure things are locked down, but let’s also make sure we are having sensible discussions around how we are surveilling our API consumers, and end users along the way.


The Concept Of API Management Has Expanded So Much The Concept Should Be Retired

API management was the first area of my research I started tracking on in 2010, and it has been the seed for the 85+ areas of the API lifecycle I’m tracking on in 2017. It was a necessary vehicle for the API sector to move more mainstream, but in 2017 I’m feeling the concept is just too large, and the business of APIs has evolved enough that we should be focusing in on each aspect of API management on its own, and retire the concept entirely. I feel like at this point it will continue to confuse, and be abused, and that we can get more precise in what we are trying to accomplish, and better serve our customers along the way.

The main concepts of API management at play have historically been about authentication, service composition, logging, analytics, and billing. There are plenty of other elements that have often been lumped in there, like portal, documentation, support, and other aspects, but securing, tracking, and generating revenue from a variety of APIs and consumers has been center stage. I’d say that some of the positive aspects of the maturing and evolution of API management include more of a focus on authentication, as well as the awareness introduced by logging and analytics. Some areas that worry me are that security discussions often stop with API management, and we don’t seem to be having evolved conversations around service composition, billing, and monetization of our API resources. You rarely see these things discussed when we talk about GraphQL, gRPC, evented architecture, data streaming, and other hot topics in the API sector.

I feel like the technology of APIs conversations have outpaced the business of APIs conversations as API management matured and moved forward. Advancements in logging, analytics, and reporting have definitely happened, but understanding the value generated by providing different services to different consumers, seeing the cost associated with operations and the value generated, then charging or even paying consumers involved in that value generation in real time, seems to be getting lost. We are getting better at the tech of making our digital bits more accessible, and moving them around, but we seem to be losing the thread about quantifying the value, and associating revenue with it in real time. I see this aspect of API management still occurring, I’m just not seeing the conversations around it move forward as fast as the other areas of API management.

API monetization and plans are two separate areas of my research, and are something I’ll keep talking about, alongside authentication, logging, analysis, and security. I think the reason we don’t hear more stories about API service composition and monetization is that a) companies see this as their secret sauce, and b) there aren’t service providers delivering in these areas exclusively, adding to the conversation. How to rate limit, craft API plans, and set pricing at the service and tier levels are some of the most common questions I get. Partly because there aren’t enough conversations and resources to help people navigate, but also because there is insecurity, and skewed views of intellectual property and secret sauce. People in the API sector suck at sharing anything they view as their secret sauce, and with no service providers dedicated to API monetization, nobody is pumping the story machine (beyond me).
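
Since these questions usually arrive together, here is a minimal sketch of how plans, rate limits, service composition, and pricing can be expressed in one place–the tiers, limits, and prices are hypothetical, just there to show the shape of the thing.

```python
# A rough sketch of expressing API plans, rate limits, and pricing together.
# Tier names, limits, and prices are hypothetical.
PLANS = {
    "free":       {"rate_limit_per_min": 60,    "monthly_price": 0,   "overage_per_call": None},
    "pro":        {"rate_limit_per_min": 1200,  "monthly_price": 99,  "overage_per_call": 0.001},
    "enterprise": {"rate_limit_per_min": 12000, "monthly_price": 999, "overage_per_call": 0.0005},
}

# Service composition: which API paths each plan can actually reach.
PLAN_PATHS = {
    "free": ["/search"],
    "pro": ["/search", "/images"],
    "enterprise": ["/search", "/images", "/firehose"],
}


def monthly_charge(plan_name, calls_made, included_calls=100_000):
    """Base subscription plus metered overage beyond the included calls."""
    plan = PLANS[plan_name]
    overage = max(calls_made - included_calls, 0)
    if plan["overage_per_call"] is None:
        return plan["monthly_price"]
    return plan["monthly_price"] + overage * plan["overage_per_call"]
```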

I’m feeling like I might be winding down my focus on API management as a single topic, and focusing in on the specific aspects of API management instead. I’ve been working on my API management guide over the summer, but I’m thinking I’ll abandon it. I might just focus on the specific aspects of conducting API management. IDK. Maybe I’ll still provide a 100K view for people, while introducing separate, much deeper looks at the elements that make up API management. I still have to worry about onboarding the folks who haven’t been around the sector for the last ten years, and help them learn everything we all have learned along the way. I’m just feeling like the concept is a little dated, and is something that can start working against us in some of the conversations we are having about our API operations, where some important elements like security and monetization can fall through the cracks.


The US Postal Service Wakes Up To The API Management Opportunity In New Audit

The Office of Inspector General for the US Postal Service published an audit report on the federal agency’s API strategy, which has opened their eyes to the potential of API management, and the direct value it can bring to their customers, and their business. The USPS has some extremely high value APIs that are baked into ecommerce solutions around the country, and they have even launched an API management solution recently, but until now they have not been actively analyzing and using API usage data to guide any of their business planning decisions.

According to the report, “The Postal Service captures customer API usage data and distributes it to stakeholders outside of the Web Tools team via spreadsheets every month. However, management is not using that data to plan for future API needs. This occurred because management did not agree on which group was responsible for reviewing and making decisions about captured usage data.” I’m sure this is common in other agencies, as APIs are often evolved within IT groups, which can have significant canyons between them and the business units. Data isn’t shared unless a project specifically designates it to be shared, or leadership directs it, leaving real-time API management data out of reach of the business groups making decisions.

It is good to see another federal agency wake up to the potential of API management, and the awareness it can bring to business groups. It’s not just some technical implementation with logfiles, it is actual business intelligence that can be used to guide the agency forward, and help it better serve constituents (customers). The awareness introduced by doing APIs, then properly managing them, analyzing usage, and building an understanding of what is happening, is a journey. It’s a journey that not all federal agencies have even begun (sadly). It is important that other agencies follow the USPS’s lead, because it is likely you are already gathering valuable data, and just passing it on to external partners like the USPS has been doing, without capturing any of the value for yourself. Compounding the budget and other business challenges you are already facing, when you could be using this data to make better informed decisions, or even more importantly, to establish new revenue streams from your valuable public sector resources.

While it may seem far fetched at the moment, this API management layer reflects the future of government revenue and the tax base. This is how companies in the private sector are generating revenue, and if commercial partners are building solutions on top of public sector data and other digital resources, these government agencies should be able to generate new revenue streams from these partnerships. This is how government works with physical public resources, and there should be no difference when it comes to digital public resources. We just haven’t reached the realization that this is the future of how we make sure government is funded, and has the resources it needs to not just compete in the digital world, but actually innovate as many of us hope it will. It will take many years for federal agencies to get to this point. This is why they need to get started on their API journey, and begin managing their data assets in an organized way as the USPS is beginning to do.

API management has been around for a decade. It isn’t some new concept, and there are plenty of open source solutions available for federal agencies to put to use. All the major cloud platforms have it baked into their operations, making it a commodity, alongside compute, storage, DNS, and the other building blocks of our digital worlds. I’ll be looking for other ways to influence government leadership to light the API fire within federal agencies, like the Office of the Inspector General has done at the U.S. Postal Service. It is important that agencies develop awareness, and make business decisions from the APIs they offer, just like they are doing with their web properties. Something that will set the stage for how the government serves its constituents and customers, and generates the revenue it needs to keep operating, and even possibly leading in the digital evolution of the public sector.


Disaster API Rate Limit Considerations

This API operations consideration won’t apply to every API, but for APIs that provide essential resources in a time of need, I wanted to highlight an API rate limit cry for help that came across my desk this weekend. Our friend over at Pinboard alerted me to someone in Texas asking for help in getting Google to increase the Google Maps API rate limits for an app they were depending on during Hurricane Harvey.

The app they depended on had ceased working, and was showing a Google Maps API rate limit error, and they were trying to get the attention of Google to help increase usage limits. As Pinboard points out, it would be nice if Google had more direct support channels for requests like this, but it would also be great if API providers were monitoring API usage, aware of applications serving geographic locations being impacted, and would relax API rate limiting on their own. There are many reasons API providers leverage their API management infrastructure to make rate limit exceptions, and natural disasters seem like they should be at the top of the list.

I don’t think API providers are being malicious with rate limits in this area. I just think it is yet another area where technologists are blind to the way technology is making an impact (positive or negative) on the world around us. Staying in tune with the needs of applications that help people in their time of need seems like it will have two components: 1) knowing your applications (you should be doing this anyways) and identifying the ones that provide a public service, and 2) staying in tune with natural and other disasters that are happening around the world. We see larger platforms like Facebook and Twitter rolling out permanent solutions to assist communities in their times of need, and it seems like something that other smaller platforms should be tuning into as well.
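
Here is a minimal sketch of what that might look like in practice–a hypothetical application registry and disaster feed, with rate limits automatically relaxed for public service applications in affected regions.

```python
# A rough sketch of automatically relaxing rate limits for applications that
# serve a region hit by a declared disaster. The application registry and
# disaster feed are hypothetical.
ACTIVE_DISASTERS = [{"name": "Hurricane Harvey", "region": "US-TX"}]

APPLICATIONS = [
    {"app_id": "shelter-map", "serves_region": "US-TX", "public_service": True,
     "rate_limit_per_min": 300},
    {"app_id": "ad-scraper", "serves_region": "US-TX", "public_service": False,
     "rate_limit_per_min": 300},
]


def relax_limits_for_disasters(apps, disasters, multiplier=10):
    """Raise limits for public service apps serving affected regions."""
    affected_regions = {d["region"] for d in disasters}
    adjusted = []
    for app in apps:
        if app["public_service"] and app["serves_region"] in affected_regions:
            app = {**app, "rate_limit_per_min": app["rate_limit_per_min"] * multiplier}
        adjusted.append(app)
    return adjusted


if __name__ == "__main__":
    print(relax_limits_for_disasters(APPLICATIONS, ACTIVE_DISASTERS))
```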

Disaster support and considerations will be an area of API operations I’m going to consider adding to my research, spending more time identifying best practices, and what platforms are doing to better serve communities in a time of need using APIs.


The Importance Of API Stories

I am an API storyteller before I am an API architect, designer, or evangelist. My number one job is to tell stories about the API space. I make sure there are (almost) always 3-5 stories a day published to API Evangelist about what I’m seeing as I conduct my research on the sector, and the thoughts I’m having while consulting and working on API projects. I’ve been telling stories like this for seven years, which has proven to me how much stories matter in the world of technology, and the worlds that it is impacting–which is pretty much everything right now.

Occasionally I get folks who like to criticize what I do, making sure I know that stories don’t matter. That nobody in the enterprise or startups care about stories. Results are what matter. Ohhhhh reeeaaaly. ;-) I hate to tell you, it is all stories. VC investment in startups is all about the story. The markets all operate on stories. Twitter. Facebook. LinkedIn. Medium. TechCrunch. It is all stories. The stories we tell ourselves. The stories we tell each other. The stories we believe. The stories we refuse to believe. It is all stories. Stories are important to everything.

The mythical story about Jeff Bezos’s mandate that all employees needed to use APIs internally still accounts for 2-3% of my monthly traffic, down from 5-8% over the last couple of years, and it was written in 2012 (five years ago). I’ve seen this story on the home page of the GSA internal portal, and framed hanging on the wall in a bank in Amsterdam. Stories are important. Stories are still important when they aren’t true, or are only partially true, like the Amazon mythical tale is(n’t). Stories are how we make sense of all this abstract stuff, and turn it into relatable concepts that we can use within the constructs of our own worlds. Stories are how the cloud became a thing. Stories are why microservices and devops are becoming a thing. Stories are how GraphQL wants to be a thing.

For me, most importantly, telling stories is how I make sense of the world. If I can’t communicate something to you here on API Evangelist, it isn’t filed away in my mental filing cabinet. Telling stories is how I have made sense of the API space. If I can’t articulate a coherent story around API related technology, and it just doesn’t make sense to me, it probably won’t stick around in my storytelling, research, and consulting strategy. Stories are everything to me. If they aren’t to you, it’s probably because you are more on the receiving end of stories, and not really influencing those around you in your community, and workplace. Stories are important. Whether you want to admit it or not.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a GitHub issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, so I depend on my network to help me know what is going on.