API Management News

These are the news items I've curated in my monitoring of the API space that are related to managing APIs and that I thought were worth including in my research. I'm using all of these links to better understand how APIs are being managed across a diverse range of implementations.

Tyk Is Conducting API Surgery Meetups

I was having one of my regular calls with the Tyk team as part of our partnership, discussing what they are up to these days. I'm always looking to understand their road map, and see where I can discover any stories to tell about what they are up to. One part of their strategy for building awareness around their API management solution that I found interesting was the API Surgery event they held in Singapore last month, where they brought together API providers, developers, and architects to learn more about how Tyk can help them out in their operations.

API surgery seems like an interesting evolution in the Meetup formula. They have a lot of the same elements as a regular Meetup, like making sure there is pizza and drinks, but instead of presentations, they ask folks to bring their APIs along, and they walk them through setting up Tyk, delivering an API management layer for their API operations. If they don't have their own API, no problem. Tyk makes sure there are test APIs for them to use while learning about how things work, helping them understand how to deliver API developer onboarding, documentation, authentication, rate limiting, monitoring, analytics, and the other features that Tyk delivers.

They had about 12 people show up to the event, with a handful of business users, as well as some student developers. They even got a couple of new clients from the event. It seems like a good way to not beat around the bush about what an API service provider wants from event attendees, and to get down to the business at hand: learning how to secure and manage your API. I think the Meetup format still works for API providers and service providers looking to reach an audience, but I like hearing about evolutions in the concept, doing things that might bring out a different type of audience, and cutting out some of the same tactics we've seen play out over the last decade.

I could see Meetups like this working well at this scale. You don't need to attract large audiences with this approach. You just need a handful of interested people, looking to learn about your solution, and understand how it solves a problem they have. Tyk doesn't have to play games about why they are putting on the event, and people get focused time with a single API service provider. Programming language meetups still make sense, but I think as the API sector continues to expand, API service provider, or even API provider focused gatherings can also make sense. I'm going to keep an eye on what Tyk is doing, and look for other examples of Meetups like this. It might reflect some positive changes out there on the landscape.

Disclosure: Tyk is an API Evangelist partner.


That Point Where API Session Management Becomes API Surveillance

I was talking to my friend's TC2027 Computer and Information Security class at Tec de Monterrey via a Google Hangout today, and one of the questions I got was around managing API sessions using JWT, spawned from a story about securing JWT. A student was curious about managing sessions across API consumption while addressing security concerns, making sure tokens aren't abused, and that API consumption by 3rd parties who shouldn't have access doesn't go unnoticed.

I feel like there are two important, and often competing, interests occurring here. We want to secure our API resources, making sure data isn't leaked, and prevent breaches. We want to make sure we know who is accessing resources, and develop a heightened awareness regarding who is accessing what, and how they are putting them to use. However, the more we march down the road of managing sessions, logging, analyzing, tracking, and securing our APIs, the more we simultaneously ramp up the surveillance of our platforms, and of the web, mobile, network, and device clients who are putting our resources to use. Sure, we want to secure things, but we also want to think about the opportunity for abuse, as we are working to manage abuse on our platforms.

To answer the question around how to track sessions across API operations, I recommended thinking about the identification layer, which includes JWT and OAuth, depending on the situation. Beyond that, you should be looking at other dimensions for identifying sessions, like IP address, timestamps, user agent, and any other identifying characteristics. An app or user token is much more about identification than it is actual security, and to truly validate a session you should have more than one dimension beyond that key, identifying what healthy sessions look like, as well as unhealthy or unique sessions that might be out of the realm of normal operations.
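
To make this a little more concrete, here is a minimal sketch of what multi-dimensional session identification might look like, with all names and thresholds being hypothetical, something you would tune against your own traffic:

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical in-memory record of what each token's sessions have looked like.
KNOWN_SESSIONS = {}

def fingerprint(ip: str, user_agent: str) -> str:
    """Fold the secondary dimensions into a single comparable value."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def assess_request(token: str, ip: str, user_agent: str, now: datetime) -> str:
    """Flag calls where the token is valid but the surrounding dimensions
    (IP, user agent, timing) do not line up with prior sessions."""
    current = fingerprint(ip, user_agent)
    seen = KNOWN_SESSIONS.get(token)
    if seen is None:
        KNOWN_SESSIONS[token] = {"fingerprint": current, "last_seen": now}
        return "new-session"
    if seen["fingerprint"] != current:
        return "suspect"  # token reused from a different client context
    if now - seen["last_seen"] > timedelta(hours=12):
        seen["last_seen"] = now
        return "stale"    # a long-dormant token suddenly active again
    seen["last_seen"] = now
    return "healthy"

print(assess_request("key-1", "203.0.113.7", "my-app/1.0", datetime.utcnow()))
```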

To accomplish all of this, I recommend implementing a modern API management solution, but also pulling in logging from all the other layers, including DNS, web server, database, and any other system in the stack. To be able to truly identify healthy and unhealthy sessions you need visibility, and synchronicity, across all logging layers of the API stack. Do the API management logs reflect what the DNS and web server logs show? This is where access tiers, rate limits, and overall consumption awareness really come in, along with having the right tools to lock things down and freeze keys and tokens, as well as being able to identify what healthy API consumption looks like, providing a blueprint for what API sessions should, or shouldn't, be occurring.
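
A crude sketch of what checking synchronicity across two logging layers might look like, assuming both streams have already been parsed and share a request ID, which is an assumption you would need to engineer into your own stack:

```python
from collections import defaultdict

# Hypothetical, already-parsed entries from two layers of the same stack.
gateway_logs = [{"request_id": "abc123", "token": "key-1", "status": 200}]
web_server_logs = [{"request_id": "abc123", "remote_ip": "203.0.113.7"}]

def correlate(gateway, web):
    """Index both streams by a shared request ID so every API management
    entry can be checked against the layer beneath it."""
    merged = defaultdict(dict)
    for entry in gateway:
        merged[entry["request_id"]]["gateway"] = entry
    for entry in web:
        merged[entry["request_id"]]["web"] = entry
    return merged

for rid, layers in correlate(gateway_logs, web_server_logs).items():
    if "gateway" not in layers or "web" not in layers:
        # A request visible at only one layer is a gap in visibility.
        print(f"request {rid} is only visible at one layer")
```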

At this point in the conversation I also like to point out that we should stop and consider at what point all of this API authentication, security, logging, analysis, reporting, and session management becomes surveillance. Are we seeking API security because it is what we need, or just because it is what we do? I know we are defensive about our resources, and we should be going the distance to keep data private and secure, but at some point, by collecting more data and establishing more logging streams, we actually begin to work against ourselves. I'm not saying it isn't worth it in some cases, I am just saying that we should be questioning our own motivations, and the potential for introducing more abuse, as we police, surveil, and secure our APIs from abuse.

As technologists, we aren’t always the best at stepping back from our work, and making sure we aren’t introducing new problems alongside our solutions. This is why I have my API surveillance research, alongside my API authentication, security, logging, and other management research. We tend to get excited about, and hyper focused on the tech for tech’s sake. The irony of this situation is that we can also introduce exploitation and abuse around our practices for addressing exploitation and abuse around our APIs. Let’s definitely keep having conversations around how we authenticate, secure, and log to make sure things are locked down, but let’s also make sure we are having sensible discussions around how we are surveilling our API consumers, and end users along the way.


The Concept Of API Management Has Expanded So Much That The Concept Should Be Retired

API management was the first area of my research I started tracking on in 2010, and has been the seed for the 85+ areas of the API lifecycle I’m tracking on in 2017. It was a necessary vehicle for the API sector to move more mainstream, but in 2017 I’m feeling the concept is just too large, and the business of APIs has evolved enough that we should be focusing in on each aspect of API management on its own, and retire the concept entirely. I feel like at this point it will continue to confuse, and be abused, and that we can get more precise in what we are trying to accomplish, and better serve our customers along the way.

The main concepts of API management at play have historically been about authentication, service composition, logging, analytics, and billing. There are plenty of other elements that have often been lumped in there, like portal, documentation, support, and other aspects, but securing, tracking, and generating revenue from a variety of APIs and consumers has been center stage. I'd say that some of the positive aspects of the maturing and evolution of API management include more of a focus on authentication, as well as the awareness introduced by logging and analytics. Some areas that worry me are that security discussions often stop with API management, and we don't seem to be having evolved conversations around service composition, billing, and monetization of our API resources. You rarely see these things discussed when we talk about GraphQL, gRPC, evented architecture, data streaming, and other hot topics in the API sector.

I feel like the technology of APIs conversations have outpaced the business of APIs conversations as API management matured and moved forward. Logging, analytics, and reporting have definitely advanced, but understanding the value generated by providing different services to different consumers, seeing the cost associated with operations and the value generated, then charging, or even paying, consumers involved in that value generation in real-time, seems to be getting lost. We are getting better at the tech of making our digital bits more accessible, and moving them around, but we seem to be losing the thread about quantifying the value, and associating revenue with it in real-time. I see this aspect of API management still occurring, I'm just not seeing the conversations around it move forward as fast as the other areas of API management.

API monetization and plans are two separate areas of my research, and are something I'll keep talking about, alongside authentication, logging, analysis, and security. I think the reason we don't hear more stories about API service composition and monetization is that a) companies see this as their secret sauce, and b) there aren't service providers delivering exclusively in these areas, adding to the conversation. How to rate limit, craft API plans, and set pricing at the service and tier levels are some of the most common questions I get. Partly because there isn't enough conversation and resources to help people navigate, but also because there is insecurity, and skewed views of intellectual property and secret sauce. People in the API sector suck at sharing anything they view as their secret sauce, and with no service providers dedicated to API monetization, nobody is pumping the story machine (beyond me).
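
For what it's worth, here is a rough sketch of how plans and tiers might be modeled, with every name and number being illustrative, not a recommendation:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One access tier within a plan; every number here is illustrative."""
    name: str
    rate_limit_per_minute: int
    monthly_quota: int
    price_per_call: float  # 0.0 for free tiers

@dataclass
class Plan:
    """A named plan composing services behind a shared set of tiers."""
    name: str
    services: list = field(default_factory=list)  # API paths in the plan
    tiers: list = field(default_factory=list)

starter = Plan(
    name="starter",
    services=["/users", "/search"],
    tiers=[
        Tier("free", rate_limit_per_minute=10, monthly_quota=10_000, price_per_call=0.0),
        Tier("paid", rate_limit_per_minute=100, monthly_quota=1_000_000, price_per_call=0.001),
    ],
)
print(starter)
```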

I'm feeling like I might be winding down my focus on API management as a single concept, and focusing in on its specific aspects. I've been working on my API management guide over the summer, but I'm thinking I'll abandon it, and just focus on the specific aspects of conducting API management. IDK. Maybe I'll still provide a 100K view for people, while introducing separate, much deeper looks at the elements that make up API management. I still have to worry about onboarding the folks who haven't been around the sector for the last ten years, and help them learn everything we all have learned along the way. I'm just feeling like the concept is a little dated, and is something that can start working against us in some of the conversations we are having about our API operations, where important elements like security and monetization can fall through the cracks.


The US Postal Service Wakes Up To The API Management Opportunity In New Audit

The Office of Inspector General for the US Postal Service published an audit report on the federal agency's API strategy, which has opened their eyes to the potential of API management, and the direct value it can bring to their customers, and their business. The USPS has some extremely high value APIs that are baked into ecommerce solutions around the country, and has even launched an API management solution recently, but until now has not been actively analyzing and using API usage data to guide any of their business planning decisions.

According to the report, “The Postal Service captures customer API usage data and distributes it to stakeholders outside of the Web Tools team via spreadsheets every month. However, management is not using that data to plan for future API needs. This occurred because management did not agree on which group was responsible for reviewing and making decisions about captured usage data.” I’m sure this is common in other agencies, as APIs are often evolved within IT groups, that can have significant canyons between them and any business units. Data isn’t shared, unless a project specifically designates it to be shared, or leadership directs it, leaving real-time API management data out of reach of those business groups making decisions.

It is good to see another federal agency wake up to the potential of API management, and the awareness it can bring to business groups. It's not just some technical implementation with logfiles, it is actual business intelligence that can be used to guide the agency forward, and help an agency better serve constituents (customers). The awareness introduced by doing APIs, then properly managing them, analyzing usage, and building an understanding of what is happening, is a journey. It's a journey that not all federal agencies have even begun (sadly). It is important that other agencies follow the USPS's lead, because it is likely you are already gathering valuable data, and just passing it on to external partners like the USPS has been doing, not capturing any of the value for yourself, compounding the budget and other business challenges you are already facing, when you could be using this data to make better informed decisions, or even more important, establishing new revenue streams from your valuable public sector resources.

It may seem far fetched at the moment, but this API management layer reflects the future of government revenue and the tax base. This is how companies in the private sector are generating revenue, and if commercial partners are building solutions on top of public sector data and other digital resources, these government agencies should be able to generate new revenue streams from these partnerships. This is how government works with physical public resources, and there should be no difference when it comes to digital public resources. We just haven't reached the realization that this is the future of how we make sure government is funded, and has the resources it needs to not just compete in the digital world, but actually innovate as many of us hope it will. It will take many years for federal agencies to get to this point. This is why they need to get started on their API journey, and begin managing their data assets in an organized way, as the USPS is beginning to do.

API management has been around for a decade. It isn't some new concept, and there are plenty of open source solutions available for federal agencies to put to use. All the major cloud platforms have it baked into their operations, making it a commodity, alongside compute, storage, DNS, and the other building blocks of our digital worlds. I'll be looking for other ways to influence government leadership to light the API fire within federal agencies like the Office of the Inspector General has done at the U.S. Postal Service. It is important that agencies be developing awareness, and making business decisions from the APIs they offer, just like they are doing from their web properties. Something that will set the stage for the future of how the government serves its constituents, customers, and generates the revenue it needs to keep operating, and even possibly leading in the digital evolution of the public sector.


Disaster API Rate Limit Considerations

This API operations consideration won't apply to every API, but for APIs that provide essential resources in a time of need, I wanted to highlight an API rate limit cry for help that came across my desk this weekend. Our friend over at Pinboard alerted me to someone in Texas asking for help in getting Google to increase the Google Maps API rate limits for an app they were depending on as Hurricane Harvey made landfall:

The app they depended on had ceased working and was showing a Google Maps API rate limit error, and they were trying to get the attention of Google to help increase usage limits. As Pinboard points out, it would be nice if Google had more direct support channels for making requests like this, but it would also be great if API providers were monitoring API usage, aware of applications serving geographic locations being impacted, and would relax API rate limiting on their own. There are many reasons API providers leverage their API management infrastructure to make rate limit exceptions, and natural disasters seem like they should be top of the list.

I don't think API providers are being malicious with rate limits in this area. I just think it is yet another area where technologists are blind to the way technology is making an impact (positive or negative) on the world around us. Staying in tune with the needs of applications that help people in their time of need seems like it will have two components: 1) knowing your applications (you should be doing this anyways) and identifying the ones that provide a public service, and 2) staying in tune with natural and other disasters that are happening around the world. We see larger platforms like Facebook and Twitter rolling out permanent solutions to assist communities in their times of need, and it seems like something that other smaller platforms should be tuning into as well.

Disaster support and considerations will be an area of API operations I'm going to consider adding to my research, spending more time identifying best practices, and what platforms are doing to better serve communities in a time of need using APIs.


The Importance Of API Stories

I am an API storyteller before I am an API architect, designer, or evangelist. My number one job is to tell stories about the API space. I make sure there are (almost) always 3-5 stories a day published to API Evangelist about what I'm seeing as I conduct my research on the sector, and thoughts I'm having while consulting and working on API projects. I've been telling stories like this for seven years, which has proven to me how much stories matter in the world of technology, and the worlds that it is impacting, which is pretty much everything right now.

Occasionally I get folks who like to criticize what I do, making sure I know that stories don’t matter. That nobody in the enterprise or startups care about stories. Results are what matter. Ohhhhh reeeaaaly. ;-) I hate to tell you, it is all stories. VC investment in startups is all about the story. The markets all operate on stories. Twitter. Facebook. LinkedIn. Medium. TechCrunch. It is all stories. The stories we tell ourselves. The stories we tell each other. The stories we believe. The stories we refuse to believe. It is all stories. Stories are important to everything.

The mythical story about Jeff Bezos’s mandate that all employees needed to use APIs internally is still 2-3% of my monthly traffic, down from 5-8% for the last couple of years, and it was written in 2012 (five years ago). I’ve seen this story on the home page of the GSA internal portal, and framed hanging on the wall in a bank in Amsterdam. Stories are important. Stories are still important when they aren’t true, or partially true, like the Amazon mythical tale is(n’t). Stories are how we make sense of all this abstract stuff, and turn it into relatable concepts that we can use within the constructs of our own worlds. Stories are how the cloud became a thing. Stories are why microservices and devops is becoming a thing. Stories are how GraphQL wants to be a thing.

For me, most importantly, telling stories is how I make sense of the world. If I can’t communicate something to you here on API Evangelist, it isn’t filed away in my mental filing cabinet. Telling stories is how I have made sense of the API space. If I can’t articulate a coherent story around API related technology, and it just doesn’t make sense to me, it probably won’t stick around in my storytelling, research, and consulting strategy. Stories are everything to me. If they aren’t to you, it’s probably because you are more on the receiving end of stories, and not really influencing those around you in your community, and workplace. Stories are important. Whether you want to admit it or not.


Which Platforms Have Control Over The Conversation Around Their Bots

I spend a lot of time monitoring API platforms, thinking about different ways of identifying which ones are taking control of the conversation around how their platforms operate. One example of this out in the wild can be found with bots, by taking a quick look at which of the major bot platforms own the conversation around the automation going on via their platforms.

First you take a look at Twitter, by doing a quick Google search for Twitter Bots:

Then you take a look at Facebook, by doing a quick Google search for Facebook Bots:

Finally take a look at Slack, by doing a quick Google search for Slack Bots:

It is pretty clear who owns the conversation when it comes to bots on their platform. While Twitter and Facebook both have information and guidance about doing bots, they do not own the conversation like Slack does, something that is reflected in the search engine placement. It is also something that sets the tone of the conversation that is going on within the community, and defines the types of bots that will emerge on the platform.

As I've said before, if you have a web or mobile property online today, you need to be owning the conversation around your API, or someone eventually will own it for you. The same goes for automation around your platform, and the introduction of bots, automated users, traffic, advertising, and other aspects of doing business online today. Honestly, I wouldn't want to be in the business of running a platform these days. It is why I work so hard to dominate and own my own presence, just so that I can beat back what is said about me, and own the conversation on at least Google, Twitter, LinkedIn, Facebook, and Github.

Seems to me, if you are going to enable automation on your platform via APIs, the conversation should be something that you own completely, making sure you provide some pretty strong guidance and direction.


The Elasticsearch Security APIs

I was looking at the set of security APIs over at Elasticsearch as I was diving into my API security research recently. I thought the areas where they provide security APIs for the search platform were worth noting and including in not just my API security research, but also search, deployment, and probably my authentication research as well. A quick sketch of calling a couple of these follows the list below.

  • Authenticate API - The Authenticate API enables you to submit a request with a basic auth header to authenticate a user and retrieve information about the authenticated user.
  • Clear Cache API - The Clear Cache API evicts users from the user cache. You can completely clear the cache or evict specific users.
  • User Management APIs - The user API enables you to create, read, update, and delete users from the native realm. These users are commonly referred to as native users.
  • Role Management APIs - The Roles API enables you to add, remove, and retrieve roles in the native realm. To use this API, you must have at least the manage_security cluster privilege.
  • Role Mapping APIs - The Role Mapping API enables you to add, remove, and retrieve role-mappings. To use this API, you must have at least the manage_security cluster privilege.
  • Privilege APIs - The has_privileges API allows you to determine whether the logged in user has a specified list of privileges.
  • Token Management APIs - The token API enables you to create and invalidate bearer tokens for access without requiring basic authentication. The get token API takes the same parameters as a typical OAuth 2.0 token API except for the use of a JSON request body.
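
To give a feel for a couple of these, here is a rough sketch using Python's requests library, assuming a local cluster with security enabled; the /_security/ paths follow the current documentation, with older releases using a /_xpack/security/ prefix instead, so check the docs for your version:

```python
import requests

ES = "https://localhost:9200"   # assumption: local cluster, security enabled
AUTH = ("elastic", "changeme")  # placeholder credentials

# The authenticate API echoes back details about the authenticated user.
who = requests.get(f"{ES}/_security/_authenticate", auth=AUTH, verify=False)
print(who.json())

# The user management API creates a user in the native realm.
created = requests.post(
    f"{ES}/_security/user/demo_reader",
    json={"password": "a-long-password", "roles": ["read_only"]},
    auth=AUTH,
    verify=False,
)
print(created.status_code)
```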

Come to think of it, I'll add this to my API management research as well. Much of this overlaps with what should be a common set of API management services as well. Like much of my research, there are many different dimensions to my API security research. I'm looking to see how API providers are securing their APIs, as well as how service providers are selling security services to API providers. I'm also keen on aggregating common API design patterns for security APIs, and quantifying how they overlap with other stops along the API lifecycle.

While the cache API is pretty closely aligned with delivering a search API, I think all of these APIs provide potential building blocks to think about when you are deploying any API, and represent the Venn diagram that is API authentication, management, and security. I'm going through the rest of the Elasticsearch platform looking for interesting approaches to ensuring their search solutions are secure. I don't feel like there are any search specific characteristics of API security that I will need to include in my final API security industry guide, but Elasticsearch's approach has reinforced some of the existing security building blocks I already had on my list.


When You See API Rate Limiting As Security

I'm neck deep into my assessment of the world of API security this week, a process which always yields plenty of random thoughts, which end up becoming stories here on the blog. One aspect of API security I keep coming across in this research is the concept of API rate limiting as being security. This is something I've long attributed to API management service providers making their mark on the API landscape, but as I dig deeper I think there is more to this notion of what API security is (or isn't). I think it has more to do with API providers than with the companies selling their warez to these API providers.

The API management service providers have definitely set the tone for the API security conversation (good), by standing up a gateway, and providing tools for limiting what access is available. I think many data, content, and algorithmic stewards are very narrowly focused on security being ONLY about limiting access to their valuable resources. Many folks I come across see their resources as valuable, and when they begin doing APIs they have a significant amount of concern around putting their resources on the Internet, but once you secure and begin rate limiting things, all security concerns appear to have been dealt with. Competitors, and others, just can't get at your valuable resources, they have to come through the gate--API security done.

Many API providers I encounter have unrealistic views of the value of their data, content, and algorithms, and when you match this with their unrealistic views about how much others want access to this valuable content, you end up with a vacuum that allows for some very narrow views of what API security is. To help support this type of thinking, I feel like the awareness generated from API management is often focused on generating revenue, and not always about understanding API abuse, which is also something that can create blind spots when it comes to the database, server, and DNS level logging and layers where security threats emerge. I'm assuming folks often feel comfortable that the API management layer is sufficiently securing things by rate limiting, and that we can see all traffic through the analytics dashboard. I'm feeling this is one of the reasons folks aren't looking up at the bigger API security picture.

From what I'm seeing, assumptions that the API management layer is securing things can leave blind spots in other areas like DNS, threat information gathering, aggregation, collaboration, and sharing. I've come across API providers who are focused in on API management, but don't have visibility at the database, server, container, and web server logging levels, and are only paying attention to what their API management dashboard provides access to. I feel like API management opened up a newfound awareness for API providers, something that has evolved and spread to API monitoring, API testing, and API performance. I feel like the next wave of awareness will be in the area of API security. I'm just trying to explore ways that I can help my readers and clients better understand how to expand their vision of API security beyond their current field of vision.


We Have A Hostile CEO Which Requires A Shift In Our API Strategy

As I work my way through almost one hundred federal government API developer portals, almost 500 APIs, and 133 Github accounts for federal agencies, the chilling effect of the change of leadership in this country becomes clear. You can tell the momentum built up across hundreds of federal agencies over the last five years is still moving, but the silence across blogs, Twitter accounts, change logs, and Github repos shows that the pace of acceleration is in jeopardy.

When you are browsing agency developer portals you come across phrases like this: “As part of the Open Government Initiative, the BusinessUSA codebase is available on the BusinessUSA GitHub Open Source Repository.” The link to the Open Government Initiative leads to a page on the White House website that has been removed, something you can easily find in the Obama archives. I am coming across numerous examples like this of how the change in leadership has created a vacuum when it comes to API and open data leadership, at a time when we should be doubling down on sharing data, content, and putting algorithms to work across the federal government.

After several days immersed in federal government developer areas it is clear we have a hostile CEO, which requires a shift in our API strategy. After six months it is clear that the current leadership has no interest in transparency, observability, or even the efficiency in government that is achieved from opening up data via public, but secure, APIs. This doesn't mean the end of our open data and API efforts, it just means we lose the top down leadership we've enjoyed for the last eight years when it came to technology in government, and efforts will have to shift to a more bottom up approach, with agencies and departments often setting their own agenda.

This is nothing new, and it won't be the last time we face this working with APIs across the federal government. Even during times when we have the full support of leaders we should always be on the lookout for threats, whether technical, business, or political. Across once active API efforts I'm regularly finding broken links to previous leadership documents and resources at the executive level. We need to make sure that we shift these resources to a more federated approach in the future, where we reference central resources, but keep a cached copy locally to allow for any future loss of leadership. This is one reason we should be emphasizing the usage of Github across agencies, which offloads the storage and maintenance of materials to each individual agency, group, or even the project level.

It is easy to find yourself frustrated in the current environment being cultivated by the leadership at the moment. However, with the right planning and communication we should be able to work around it, and develop API implementations that are resilient to change, whether technical, budgetary, or on the leadership front as we are dealing with now. Don't give up hope. If you need someone to talk with about your project please feel free to reach out publicly or privately. There are many folks still working hard on APIs inside and outside the federal government firewall, and they need our help. If you find yourself abandoning a project, please try to make sure as much of the work as possible is available on your agency's Github repository, including code, definitions, and any documentation. This is the best way to ensure your work will continue to live on. Thank you for your service.


API Management Across All Government Agencies

This isn't a new drum beat for me, but it is one I wanted to pick up again as part of the federal government research and speaking I'm doing this month. It is regarding the management of APIs across the federal government. In short, helping agencies successfully secure, meter, analyze, and develop awareness of who is using government API resources. API management is a commodity in the private technology sector, and is something that has been gaining momentum in government circles, but we have a lot more work ahead to get things where we need them.

The folks over at 18F have done a great job of helping bake API management into government APIs using API Umbrella, resulting in these twelve federal agencies:

This doesn’t just mean that each of these agencies are managing their APIs. It also means that all of these agencies are managing their APIs in a consistent way, using a consistent tool. Something that is allowing these agencies to effectively manage:

I know that both 18F and USDS are working hard on this, but this is an area where we need agencies to step up, as well as the private sector. We need any vendor doing API deployment projects for any agency to work together to make sure their agency is using a standardized approach. This means that vendors should make the investment when it comes to reaching out to the GSA and 18F, to make sure they are up to speed on what is needed to leverage the work already in motion at api.data.gov.

Doing API management in a consistent way across ALL federal government APIs is super critical to all of this scaling as we all envision. The federal government possesses a wealth of valuable data and content that can benefit the private sector. This isn't just about making the federal government more transparent and observable, this is also about making these valuable resources available in a usable, sustainable way to the private sector--industries will be better off for it. I'm happy to see the progress these twelve agencies have made when it comes to API management, but we need to get to work helping every other agency play catch up, making it something that is baked into ALL API deployment projects by default.


Requiring ALL Platform Partners Use The API So There Is A Registered Application

I wrote a story about Twitter allowing users to check or uncheck a box regarding sharing data with select Twitter partners. While I am happy to see this move from Twitter, I feel the concept of information sharing simply being a checkbox is unacceptable. I wanted to make sure I praised Twitter in my last post, but I'd like to expand upon what I'd like to see from Twitter, as well as ALL other platforms that I depend on in my personal and professional life.

There is no reason that EVERY platform we depend on couldn't require ALL partners to use their API, resulting in every single application of our data being registered as an official OAuth application. The technology is out there, and there is no reason it can't be the default mode for operations. There just hasn't been the need amongst platform providers, as well as no significant demand from platform users. Even if you don't get full access to delete and adjust the details of the integration and partnership, I'd still like to see companies share as many details as they possibly can regarding any partner sharing relationships that involve my data.

OAuth is not the answer to all of the problems on this front, but it is the best solution we have right now, and we need to talk more about how we can make it more intuitive, informative, and usable by average end-users, as well as 3rd party developers and platform operators. APIs plus OAuth are the lowest cost, widely adopted, standards based approach to establishing a pipeline for ALL data, content, and algorithms to operate within, one that gives a platform the access and control they desire, while opening up access to 3rd party integrators and application developers, and most importantly, giving a voice to end-users--we just need to continue discussing how we can keep amplifying this voice.

To the folks who will DM, email, and Tweet at me after this story: I know it's unrealistic and the platforms will never do business like this, but it is a future we could work towards. I want EVERY online service that I depend on to have an API. I want all of them to provide OAuth infrastructure to govern identity and access management for personally identifiable information. I want ALL platform partners to be required to use a platform's API, and register an application for any user whose data they are accessing on their behalf. I want all internal platform projects to also be registered as applications in my OAuth management area. Crazy talk? Well, Google does it for (most of) their internal applications, why can't others? Platform apps, partner apps, and 3rd party apps all side by side.

The fact that this post will be viewed as crazy talk by most who work in the technology space demonstrates the imbalance that exists. The technology for doing this exists. Doing this would improve privacy and security. The only reason we do not do it is because the platforms, their partners, and investors are too worried about being this observable across operations. There is no reason why APIs plus OAuth applications can't be universal across ALL platforms online, with ALL partners being required to access personally identifiable information through an API, and with end-users at least involved in the conversation, if not given full control over whether or not personally identifiable information is shared.


Making An Account Activity API The Default

I was reading an informative post about the Twitter Account Activity API, which seems like something that should be the default for ALL platforms. In today's cyber insecure environment, we should have the option to subscribe to a handful of events regarding our accounts, or be able to sign up for a service that can subscribe and help us make sense of our account activity.

An account activity API should be the default for ALL the platforms we depend on. There should be a wealth of certified aggregate activity services that can help us audit and understand what is going on with our platform account activity. We should be able to look at, understand, and react to the good and bad activity via our accounts. If there are applications doing things that don’t make sense, we should be able to suspend access, until more is understood.

The Twitter Account Activity API callback request contains three levels of detail:

  • direct_message_events: An array of Direct Message Event objects.
  • users: An object containing hydrated user objects keyed by user ID.
  • apps: An object containing hydrated application objects keyed by app ID.

The Twitter Account Activity API provides a nice blueprint other API providers can follow when thinking about their own solution. While the schema returned will vary between providers, it seems like the API definition, and the webhook driven process can be standardized and shared across providers.
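
As a sketch of what the receiving end might look like, here is a minimal Flask handler keyed to the three levels of detail above; it is hypothetical, and leaves out the challenge-response check (CRC) that Twitter actually requires for registered webhooks:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/account-activity", methods=["POST"])
def account_activity():
    # The three keys mirror the levels of detail described above.
    payload = request.get_json(force=True)
    for event in payload.get("direct_message_events", []):
        print("event type:", event.get("type"))
    for user_id in payload.get("users", {}):
        print("user involved:", user_id)
    for app_id in payload.get("apps", {}):
        # An app ID you don't recognize is the one worth suspending.
        print("app involved:", app_id)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```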

The Twitter Account Activity API is in beta, but I will keep an eye on it. Now that I have the concept in my head, I’ll also look for this type of API available on other platforms. It is one of those ideas I think will be sticky, and if I can kick up enough dust, maybe other API providers will consider. I would love to have this level of control over my accounts, and it is also good to see Twitter still rolling out new APIs like this.


I Would Like To See More API Test Drives

The Azure Marketplace has the ability to test drive anything that is deployed in the Azure Marketplace. As someone who has to sign up for an endless number of new accounts to be able to play with APIs and API services, I’m a big fan of the concept of a test drive–not just for web applications, or backend infrastructure, but specifically for individual APIs and microservices.

From the Azure site: "Test Drives are ready to go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit with a Test Drive is that it is pre-provisioned - you don't have to download, set up or configure the product and can instead spend your time on evaluating the user experience, key features, and benefits of the product."

I like it. I want more of these. I want to be able to test drive, then deploy any API I want. I don't want to sign up for an account, enter my credit card details, talk to sales, or sign up for a 30 day trial--I want to test drive. I want it to have data in it, and be pre-configured for a variety of use cases, helping me understand what is possible.

I want all the friction removed between me finding an API (discovery via a marketplace), understanding what an API does, test driving it, and then deploying that API in any cloud I want. I think we are still a little bit off from this being as frictionless as I envision in my head, but I hope with enough nudging we will get there very soon.


Temporary Interaction Limits

I spend a lot of time thinking about API rate limits, how they can hurt API providers, or, as my friend Tyler Singletary (@harmophone) says, incentivize creativity. I think your view on rate limits will vary depending on which side of the limit you stand, as well as your own creative potential and limitations. I agree with Tyler that they can incentivize creativity, but it doesn't mean that all limitations imposed will ultimately be good, or that all creativity will be good.

I found myself contemplating Github’s recent introduction of temporary interaction limits which means “maintainers can temporarily limit who can comment, create pull requests, and open issues among existing users, collaborators, and prior contributors.” While this isn’t directly about API rate limiting, it does overlap, and provide us with some thoughts we can apply to our world of API consumption, and how we sensibly moderate the access to the digital resources we are making available online.

When it comes to the real-time fetishism of the digital world, those with the loudest bullhorn often get heard, and think real-time is always good, while I am becoming less convinced that anything gets done in a 24-hour time frame. Despite what many want you to believe, real-time does not always mean good. Sometimes it might do you some good to chill out for 24 hours before you continue commenting, posting, or increasing your consumption of a digital resource, whether you want to admit it or not.

Our digital overlords have convinced us that more is better and real time is always ideal. Temporary interaction limits may not be the right answer in all situations, but they do give us another example of rate limiting by a major provider that we can consider and follow when it comes to crafting limitations around our digital resources. This is what rate limitations are all about for me: thoughtful consideration about how much of a good thing you will need each second, minute, day, week, or month. It is a great way to turn a quality digital resource into something better, or to maintain the quality and value of a seemingly infinite resource by imposing just a handful of limitations.
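
For anyone wanting to play with the concept, here is a minimal fixed-window limiter sketch, loosely inspired by the 24-hour window Github uses, with all of the numbers being entirely up to you:

```python
import time
from collections import defaultdict

class TemporaryLimit:
    """A minimal fixed-window limiter; consumers that hit the ceiling
    simply wait out the remainder of the window."""
    def __init__(self, max_per_window: int, window_seconds: int):
        self.max = max_per_window
        self.window = window_seconds
        self.counts = defaultdict(int)
        self.window_start = defaultdict(float)

    def allow(self, consumer: str) -> bool:
        now = time.time()
        if now - self.window_start[consumer] >= self.window:
            # A new window begins; reset the consumer's count.
            self.window_start[consumer] = now
            self.counts[consumer] = 0
        if self.counts[consumer] >= self.max:
            return False
        self.counts[consumer] += 1
        return True

limiter = TemporaryLimit(max_per_window=100, window_seconds=24 * 60 * 60)
print(limiter.allow("existing-user-42"))
```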


Craft Your API Design Guide So You Can Move To Other Areas of The Lifecycle

I am working on an API definition and design guide for my human services API work, helping establish a framework for approaching API design as part of the human services data and API specification, but also for implementers to follow in their own individual deployments. Every time I work on the subject of API design, I’m reminded of how far behind the API sector is when it comes to standardizing what it is we do.

Every month or so I see a new company publicly share their API design guide. When they do, my friend Arnaud always adds it to his API Stylebook, adding to the wealth of information available in his work. I'm happy to see each API design guide release, but in reality, ALL API providers should have an API design guide, and they should also be open to publishing it publicly, showing their consumers they have their act together, and sharing best practices with the wider API community.

The lack of companies sharing their API design practices and their API definitions is why we have such a deficiency when it comes to common API patterns in use. It is why we have so many variations of web APIs, as well as the underlying schema. We have an API industry because early practitioners like SalesForce, Amazon, eBay, Flickr, Delicious, Twitter, Youtube, and others were open with their API operations. People emulate what they see and know. Each wave of the API sector depends on the previous wave sharing what they do publicly–it is how this all works.

To demonstrate even further how deficient we are, I do not find companies sharing their guides for API deployment, management, testing, monitoring, clients, and other stops along the API lifecycle. I'm glad we are seeing an uptick in the number of API design guides, but we need this practice to spread to every other stop. We need successful providers to share how they deploy their APIs, and when any company hires a new developer, they should ALWAYS be given a standard guide for deploying, managing, and testing, as well as designing APIs.

It’s not rocket science, and honestly, it’s not even technical. It just means pausing for a moment, thinking about how we approach each stop in the API lifecycle, writing up an overview, publishing, and sharing it with API stakeholders, and even the wider API community. Every company doing APIs in 2017 should be crafting an API design guide so you can get to work on guides for the other areas of your lifecycle, thinking through and standardizing your approach, and making it known to every person involved–ideally, you are also being very public about all of this, and sharing your work with me and Arnaud, so we can get the word out about the good stuff you are up to! ;-)


Taxation On Public Data Via The API Management Layer

I'm involved in some very interesting conversations with public data folks who are trying to push forward the conversation around sensible revenue generation by cities, counties, states, and the federal government using public data. I'm learning a lot from these conversations, resulting in the expansion and evolution of my perceptions of how the API layer can help governments develop new revenue streams through making public data more accessible.

I have long been a proponent of using modern API management infrastructure to help government agencies generate revenue using public data. I would also add that I'm supportive of crafting sensible approaches to developing applications on top of public data and APIs in ways that generate a fair profit for private sector actors. I am also in favor of free and unfettered access to data, and observability into platform operations, as well as into ALL commercial interests developing applications on top of public data and APIs. I'm only in favor of this when the right amount of observability is present--otherwise digital good ol' boy networks form, and the public will lose.

API management is the oldest area of my API research, expanding into my other work to eventually define documentation, SDKs, communication, support, monetization, and API plans. This is where you define the business of API operations, organizing APIs into coherent catalogs, where you can then work to begin establishing a wider monetization strategy, as well as tiers and plans that govern access to data, content, and algorithms being made available via APIs. This is the layer of API operations I'm focusing on when helping government agencies better understand how they can get more in tune with their data resources, and identify potential partnerships and other applications that might establish new revenue streams.

A portion of this conversation grew out of the story from Anthony Williams about how maybe government data shouldn't always be free, where the topic of taxation came up. One possible analogy for public data access and monetization that was brought up was the Vehicle-Miles Traveled (VMT) tax, injecting the concept of taxation into my existing thoughts on revenue generation using API management. I've considered affiliate and reseller aspects of the API management layer before, applying percentage based revenue and payments on top of API access, but never thought about a government taxation layer existing here.

I thought my stance on revenue generation from public data using API management was controversial before; adding concepts of taxation to the discussion is really going to invigorate folks who are in opposition to my argument. I'm sure there is a libertarian free web, open data advocate, smaller government Venn diagram in there somewhere. I'm not too concerned, as the monetization is already going on, I'm simply talking about making it more observable, and in favor of revenue generation for data stewards and government agencies. I'm confident that most folks in opposition won't even read this paragraph, as it's buried in the middle of this post. ;-)

I take no stance on which data, content, or algorithms should be taxed, or what that tax rate should be. I leave this to data stewards and policy makers. My objective is to just introduce folks to the concept, and marry with the existing approaches to using APIs to develop digital products and services in the private sector. However, if I was wearing my policy maker hat I would suggest thinking about this as a digital VAT tax, "that is collected incrementally, based on the surplus value, added to the price on the work at each stage of production, which is usually implemented as a destination-based tax, where the tax rate is based on the location of the customer."
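
To show the mechanics I'm talking about, here is a small sketch of a destination-based tax applied at the metering layer, with every rate and price being hypothetical:

```python
# Every rate and price here is hypothetical, purely to illustrate the
# mechanics of a destination-based tax at the API management layer.
TAX_RATES = {"US-CA": 0.0725, "DE": 0.19, "default": 0.0}

def invoice(calls: int, price_per_call: float, customer_location: str) -> dict:
    """Compute the commercial charge plus a destination-based tax."""
    subtotal = calls * price_per_call
    rate = TAX_RATES.get(customer_location, TAX_RATES["default"])
    tax = subtotal * rate
    return {"subtotal": round(subtotal, 2), "tax": round(tax, 2),
            "total": round(subtotal + tax, 2)}

# 250,000 metered calls at $0.001 each, for a customer in Germany.
print(invoice(250_000, 0.001, "DE"))
```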

My thoughts on a government tax at the API management layer are at an early stage. I am just exploring the concept on my blog--this is what I do as the API Evangelist. I'd love to hear your thoughts, on your blog. I am merely suggesting a digital VAT tax at the API contract layer around public data and APIs when commercial activities are involved. Eventually, I could see the concept spread to other sectors as the API economy becomes a reality, but I feel that public data provides us with a rich test bed for a concept like this. I'm considering reframing my argument about charging for commercial access to public data using APIs as taxing commercial usage of public data using APIs, allowing for tax revenue to fund future investment in public data and API efforts.

As I remove my API Evangelist hat and think about this concept, I'm not 100% sure if I'm in agreement with my argument. It will take a lot more polishing before I'm convinced that taxation should be included in the API management layer. I'll keep exploring, and play with a variety of potential use cases, and see if I can build a case for API taxation when public data is involved, and applications are generating surplus value in the API economy. 


API Rate Limiting At The DNS Layer

I just got an email from my DNS provider CloudFlare about rate limiting and protecting my APIs. I am a big fan of CloudFlare, partly because I am a customer and use them to manage my own infrastructure, but also partly due to the way they understand APIs, and actively use them as part of their business, products, and services.

Their email spans a couple areas of my research that I find interesting, and extremely relevant: 1) DNS, 2) security, and 3) management. They are offering me something that is traditionally done at the API management layer (rate limiting), but now doing it for me at the DNS layer, expanding the value of API rate limiting into the realm of security, and specifically into defense against DDoS attacks--a serious concern.

Talk about an easy way to add value to my world as an API provider. One that is frictionless, because I'm already depending on them for the DNS layer of my web and API operations. All I have to do is sign up for the new service, and begin dialing it in for all of my APIs, which span multiple domains--all conveniently managed using CloudFlare.

Another valuable thing CloudFlare's approach does, in my opinion, is to reintroduce the concept of rate limiting to the world of websites. This helps me in my argument that companies, organizations, institutions and government agencies should be considering having APIs to alleviate website scraping. Using CloudFlare they can now rate limit the website while pointing legitimate use cases to the API where their access can be measured, metered, and even monetized when it makes sense.

I'm hoping that CloudFlare will be exposing all of these services via their API, so that I can automate the configuration of rate limiting for my APIs at the DNS level using APIs. As I design and deploy new API endpoints I want them automatically protected at the DNS layer using CloudFlare. I don't want to have to do extra work when it comes to securing and managing web or API access. I just want a baseline for all of my operations, and when I need to, I can customize per specific domains, or down to the individual API path level--the rest is automated as part of my continuous integration workflows.
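
Here is a rough sketch of what that automation might look like against the rate limits endpoint described in their documentation, with the zone ID, token, and numbers all being placeholders:

```python
import requests

ZONE = "YOUR_ZONE_ID"                                 # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder

rule = {
    "match": {"request": {"url": "api.example.com/*"}},
    "threshold": 300,  # requests allowed...
    "period": 60,      # ...per 60 seconds
    "action": {"mode": "ban", "timeout": 600},  # then block for 10 minutes
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE}/rate_limits",
    headers=HEADERS,
    json=rule,
)
print(resp.json())
```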


How I Can Help Make Sure Your API Is Ready For Use

As one of my clients is preparing to move their API from deployment to management, I'm helping them think through what is necessary to make sure their API is ready for use by a wider, more public group of developers. Ideally, I am brought into the discussion earlier on in the lifecycle, to influence design and deployment decisions, but I'm happy to be included at any time during the process. This is a generalized, and anonymized version of what I'm proposing to my client, to help make sure their API is ready for prime time--I just wanted to share with you a little of what goes on behind the scenes at API Evangelist, even when my clients aren't quite ready to talk publicly.

External Developer Number One
The first place I can help with the release of your API is by being the first external developer and a fresh pair of eyes on your API. I can help with signing up, and actually making calls against every API, to make sure things are coherent and stable before putting them in the hands of 3rd party developers at a hackathon, amongst partners, or with the general public. This is a prerequisite for me when it comes to writing a story on any API or doing additional consulting work, as it puts me in tune with what an API actually does, or doesn't do. The process will also provide you with a new perspective on the project after you have put so much work into the platform--in my case, it is a fresh pair of eyes that have onboarded with 1000s of APIs.

Crafting Your API Developer Portal
Your operations will need a single place to go to engage with everything API. I always recommend deploying API portals using Github Pages, because it becomes the first area to engage with developers on Github, as part of your overall API efforts. Github is the easiest way to design, develop, deploy, and manage the API portal for your API efforts. I suggest focusing on all of the essential building blocks that any API operations should possess:

  • Landing Page
    • Tag Line - A short tagline describing what is possible using your API.
    • Description - A quick description (single paragraph) about what is available.
  • On-boarding
    • Signup Process - A link to the sign-up process for getting involved (OpenID).
    • Getting Started - A simple description, and numbered list of what it takes to get started.
    • Authentication Overview - A page dedicated to how authentication works.
    • FAQ - A listing of frequently asked questions broken down into categories, for easy browsing.
  • Integration
    • Documentation - Interactive documentation generated from the Swagger / OpenAPI definition.
    • Code - Code samples, or software development kits for developers to put to work.
    • Postman Collection - A Postman Collection + Button for loading up APIs into Postman client.
  • Support
    • Github - Set up Github account, establish profile, and set up the portal as the first point of support.
    • Twitter - Set up a Twitter account, establish a presence, and make it known you are open for business.
    • Email - Establish a single, shared email account that can provide support for all developers.
  • Communications
    • Blog - Establish a blog using Jekyll (easy with Github Pages), and begin telling stories of the platform.
    • Twitter - Get the Twitter account in sync with communication, as well as support efforts.
  • Updates
    • Roadmap - Using Github issues, establish a label, and rhythm for sharing out the platform roadmap.
    • Issues - Using Github issues, establish a label, and rhythm for sharing out current issues with the platform.
    • Change Log - Using Github issues, establish a label and rhythm for sharing out changes made to the platform.
    • Status - Publish a monitoring and status page keeping users in tune with the platform stability and availability.
  • Legal
    • Terms of Service - Establish the terms of service for your platform.
    • Privacy Policy - Establish the privacy policy for your platform.

All of these building blocks have been aggregated from across thousands of APIs, and are something ALL successful API providers possess. I recommend starting here. You will need this as a baseline to get up and running with developers, whether generally on the web or through specific hackathons and your private engagements. Being developer number one, and helping craft, deploy, and polish the resources available via a coherent developer portal, is what I bring to the table, and I am willing to set aside time to help you make it happen.
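
To make the checklist more concrete, here is an illustrative Python sketch that scaffolds stub pages for these building blocks, in a layout suitable for a Jekyll-based Github Pages repository. The file names and front matter are my own assumptions, not any kind of standard:

```python
# An illustrative sketch that scaffolds stub pages for the portal building
# blocks above, in a layout suitable for a Jekyll-based Github Pages repo.
# File names and front matter are assumptions, not a standard.
from pathlib import Path

BUILDING_BLOCKS = {
    "index.md": "Landing Page",
    "getting-started.md": "Getting Started",
    "authentication.md": "Authentication Overview",
    "faq.md": "FAQ",
    "documentation.md": "Documentation",
    "code.md": "Code Samples",
    "support.md": "Support",
    "roadmap.md": "Roadmap",
    "change-log.md": "Change Log",
    "terms-of-service.md": "Terms of Service",
    "privacy-policy.md": "Privacy Policy",
}

def scaffold_portal(repo_dir="developer-portal"):
    root = Path(repo_dir)
    root.mkdir(exist_ok=True)
    for filename, title in BUILDING_BLOCKS.items():
        # Jekyll front matter, followed by a placeholder to fill in.
        (root / filename).write_text(
            f"---\ntitle: {title}\nlayout: default\n---\n\nTODO: {title}\n"
        )
    print(f"Scaffolded {len(BUILDING_BLOCKS)} pages in {root}/")

if __name__ == "__main__":
    scaffold_portal()
```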

Additionally, I'm happy to set into motion some other discussions regarding pushing forward your API operations:

  • Discovery - Establish a base discovery plan for the portal, including SEO and APIs.json development.
  • Validation - Validate each API endpoint, and establish JSON assertions as part of the OpenAPI definition and testing (see the sketch after this list).
  • Testing - Establish a testing strategy for not just monitoring all API endpoints, but making sure they return valid data.
  • Security - Think about security beyond just identity and API keys, and begin scanning API endpoints, and looking for vulnerabilities.
  • Embeddable - Pull together a strategy for embeddable tooling including buttons, badges, and widgets.
  • Webhooks - Consider how to develop a webhook strategy allowing information to be pushed out to developers, reducing calls to APIs.
  • iPaaS - Think through how to develop an iPaaS strategy to allow for integration with platforms like Zapier, empowering developers and non-developers alike.
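
Here is the sketch referenced in the validation and testing items above--a minimal Python example that calls an endpoint and asserts the JSON response against a schema. The endpoint and schema are hypothetical:

```python
# A minimal sketch of JSON assertions for API testing: call an endpoint and
# validate the response body against the schema its OpenAPI definition
# promises. The endpoint and schema here are hypothetical.
import requests
from jsonschema import validate  # pip install jsonschema

def assert_endpoint(base_url, path, schema):
    """Fetch an endpoint and validate the JSON body against its schema."""
    response = requests.get(base_url + path)
    assert response.status_code == 200, f"{path} returned {response.status_code}"
    validate(instance=response.json(), schema=schema)

# Hypothetical example: a /users endpoint that should return an array of
# objects, each with an id and a name.
users_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["id", "name"],
        "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    },
}
# assert_endpoint("https://api.example.com", "/users", users_schema)
```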

This is how I am helping companies make sure their APIs are ready for prime time. I regularly encounter many teams who have great APIs but have been too close to the ovens baking the bread, and find it difficult to go from development to management in a coherent way. I have on-boarded and hacked on thousands of APIs. I have been doing this for over a decade, and exclusively as a full-time job for the last seven years. I am your ideal first developer and can save you significant amounts of time when it comes to crafting and deploying your API portal.

As a one-person operation, I don't have the time to do this for every company that approaches me, but I'm happy to engage with almost everyone who reaches out, to understand how I can best help. Who knows, I might help prevent you from making some pretty common mistakes, and I am happy to be a safer, early beta user of your APIs--one that will give you the feedback you are looking for.


Open Source Drag And Drop API Lifecycle Design Tooling

I'm always on the hunt for new ways to define, design, deploy, and manage API infrastructure, and I think the AWS CloudFormation Designer provides a nice look at where things might be headed. AWS CloudFormation Designer (Designer) is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates, which translates pretty nicely to managing your API infrastructure as well.

While the AWS CloudFormation Designer spans all AWS services, all the elements are there for managing the core stops along the API life cycle, like definition, design, DNS, deployment, management, and monitoring. Each Amazon service comes with a listing of the elements available for that service, complete with all the inputs and outputs as connectors on the icons. Since all the AWS services are APIs, it's basically a drag and drop interface for mapping out how you use these APIs to define, design, deploy, and manage your API infrastructure.

Using the design tool you can create templates for governing the deployment and management of API infrastructure by your team, partners, and other customers. This approach to defining the API life cycle is the closest I've seen to what stimulated my API subway map work, which became the subject of my keynotes at APIStrat in Austin, TX. It allows API architects and providers to templatize their approaches to delivering API infrastructure, in a way that is plug and play, and evolvable using the underlying JSON or YAML templates--right alongside the OpenAPI templates we are crafting for each individual API.
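
As a rough illustration of working with those same templates programmatically, here is a Python sketch using boto3 that deploys a bare-bones template defining a single API Gateway REST API--real templates would obviously define stages, methods, and many more resources:

```python
# A hedged sketch of deploying a formation template programmatically--the
# same JSON artifact the drag and drop designer produces and consumes. The
# template below is a bare-bones API Gateway REST API for illustration.
import json
import boto3  # pip install boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyApi": {
            "Type": "AWS::ApiGateway::RestApi",
            "Properties": {"Name": "example-api"},
        }
    },
}

cloudformation = boto3.client("cloudformation")

# Deploy the templatized infrastructure as a stack.
cloudformation.create_stack(
    StackName="example-api-stack",
    TemplateBody=json.dumps(template),
)
```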

The AWS CloudFormation Designer is a drag and drop UI for the entire AWS API stack. It is something that could easily be applied to Google's API stack, Microsoft's, or any other stack you define--something that could easily be done using APIs.json, developing another layer of templating for which resource types are available in the designer, as well as for the formation templates generated by the design tool itself. There should be an open source "API formation designer" available that could span cloud providers, allowing architects to define which resources are available in their toolbox--one that anyone could fork and run in their own environment.

I like where AWS is headed with their Cloud Formation Designer. It's another approach to providing full lifecycle tooling for use in the API space. It almost reminds me of Yahoo Pipes for the AWS Cloud, which triggers awkward feels for me. I'm hoping it is a glimpse of what's to come, and someone steps up with an even more attractive drag and drop version, that helps folks work with API-driven infrastructure no matter where it runs--maybe Google will get to work on something. They seem to be real big on supporting infrastructure that runs in any cloud environment. *wink wink*


There Is More To This Than Just Having An API

There is a reason why I encourage API providers to look at not just the technology of APIs, but also invest heavily in the business and politics of API operations. There is a reason I evangelize a more open, web-based approach to doing APIs, even if you are peddling hardware and device APIs. It is because there are a number of human-centered elements present when doing APIs that will define your services, and ultimately contribute to whether they are a success or a failure.

One example of this from my API news curation archives is from the Sonos API ecosystem, and a pretty big blunder in communication that the audio device platform made late last year, which is significantly impacting their partnerships in 2017. Directly from the CEPro article:

A collective cheer roared from home-technology installers at CEDIA Expo 2016, when Sonos announced an API for home-automation integration starting with Control4 (Nasdaq: CTRL), Crestron, iPort, Lutron, and Savant.

These partners – and most other respectable smart-home systems providers – have integrated with Sonos for many years, albeit with unsanctioned drivers created through reverse-engineering of a fairly straightforward UPnP-based protocol.

But the new API kind of snuck up on dealers and vendors alike, with their customers waking up to a brand new Sonos experience in late December, courtesy of an auto-update by Sonos.

The new experience was inferior to the original, with users unable to access Spotify or Amazon Music from the home automation system, except to select favorites created through Sonos’s own app.

When you are operating an API that many different businesses depend on, communication is essential. This is why I advocate that API providers always have a clear communication and support strategy, as well as road map, issue management, and change log processes. Every single change has to be considered for its impact on the community, and you have to have a plan for how you will communicate with and support your API consumers around a change.

This is also why API providers should understand the benefits of hypermedia when it comes to change management. Hypermedia design patterns provide you with a more honest approach to dealing with change, one that helps make your partners' clients more fault tolerant. It is well worth the time learning about the handful of leading hypermedia media types. Any one of them would have helped Sonos manage change.
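
To illustrate the fault tolerance point, here is a minimal Python sketch of a hypermedia-aware client following the HAL convention--the API and link relations are hypothetical. Because the client discovers URLs from link relations instead of hard-coding paths, the provider can move resources without breaking integrations:

```python
# A minimal sketch of why hypermedia helps with change management: the
# client discovers URLs from the response's link relations instead of
# hard-coding paths. The _links structure follows the HAL convention; the
# API and its relations are hypothetical.
import requests

def get_playlists(api_root):
    # Start at the root and follow links by relation name, not by path.
    root = requests.get(api_root).json()
    playlists_url = root["_links"]["playlists"]["href"]  # server-controlled
    return requests.get(playlists_url).json()

# If the provider moves /playlists to /v2/playlists, only the link in the
# root document changes--clients that follow links keep working.
# get_playlists("https://api.example.com/")
```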

There are multiple tools in the API toolbox to help you manage change. In the end, the most effective tools involve human-to-human interaction, and actually talking to your partners early on about change, and making sure you have a robust communication strategy throughout your API lifecycle. We engineers like to think it is the API technology making the magic happen, but in the end, there is more to this than just having application programming interfaces--it is about also having the right human interfaces.


The AWS Serverless API Portal

I was looking through the Github accounts for Amazon Web Services and came across their Serverless API Portal--a pretty functional example of a forkable developer portal for your API, running on a variety of AWS services. It's a pretty interesting implementation because, in addition to the tech of API management, it also helps you with the business side of things.

The AWS Serverless Developer Portal "is a reference implementation for a developer portal application that allows users to register, discover, and subscribe to your API Products (API Gateway Usage Plans), manage their API Keys, and view their usage metrics for your APIs [...] it also supports subscription/unsubscription through a SaaS product offering through the AWS Marketplace"--providing a pretty compelling API portal solution running on AWS.

There are a couple of things I think are pretty noteworthy:

  • Application Backend (/lambdas/backend) - The application backend is a Lambda function built on the aws-serverless-express library. The backend is responsible for login/registration, API subscription/unsubscription, usage metrics, and handling product subscription redirects from AWS Marketplace.
  • Marketplace SaaS Setup Instructions - You can sell your SaaS product through AWS Marketplace and have the developer portal manage the subscription/unsubscription workflows. API Gateway will automatically provide authorization and metering for your product, and subscribers will be automatically billed through AWS Marketplace.
  • AWS Marketplace SNS Listener Function (Optional) (/listener) - The listener Lambda function will be triggered when customers subscribe or unsubscribe to your product through the AWS Marketplace console. AWS Marketplace will generate a unique SNS Topic where events will be published for your product.

This is the required infrastructure we'll need to get to what I've been talking about for some time with my wholesale API and virtual API stack stories. Amazon is providing you with the infrastructure you need to set up the storefront for your APIs, providing the management layer you will need, including monetization via their marketplace. This is a retail layer, but because your infrastructure is set up in this way, there is no reason you can't sell all or part of your setup to other wholesale customers, using the same AWS Marketplace.
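
For a rough sense of the subscription mechanics the portal wraps, here is a Python sketch using boto3 that issues an API key for a new developer and binds it to an API Gateway usage plan, so their consumption gets metered and throttled--the names and identifiers are illustrative:

```python
# A hedged sketch of the subscription mechanics behind the portal: when a
# developer registers, issue an API key and attach it to a usage plan so
# API Gateway meters and throttles their consumption. Names are illustrative.
import boto3  # pip install boto3

apigateway = boto3.client("apigateway")

def subscribe_developer(developer_email, usage_plan_id):
    """Create an API key for a developer and bind it to a usage plan."""
    key = apigateway.create_api_key(name=developer_email, enabled=True)
    apigateway.create_usage_plan_key(
        usagePlanId=usage_plan_id,
        keyId=key["id"],
        keyType="API_KEY",
    )
    return key["value"]  # hand this key back to the developer

# subscribe_developer("dev@example.com", "your-usage-plan-id")
```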

I have had AWS Marketplace on my list of solutions to better understand for some time now, but the AWS Serverless Developer Portal really begins to connect the dots for me. If you can sell access to your API infrastructure using this model, you can also sell your API infrastructure to others using this model. I will have to set up some infrastructure using this approach to better flesh out how AWS infrastructure and open templates like this serverless developer portal can help facilitate a more versatile, virtualized, and wholesale API lifecycle.

There is a more detailed walkthrough of how to get going with the AWS Serverless Developer Portal, helping you think through the details. I am a big fan of these types of templates--forkable Github repositories, with a blueprint you can follow to achieve a specific API deployment, management, or any other lifecycle objective.


An API Discovery API For Your API With Tyk

If you are selling services to the API space, you should have an API--it is just how this game works (if you are savvy). I was going through Tyk's API for their open source API management solution and came across their API definitions API, which gives you a list of APIs for each Tyk deployment--baking API discovery into the open source API management solution by default.

The API API (I still enjoy saying that) gives you the authentication, paths, versioning, and other details about each API being managed. I'm writing about this because I think that an API API should be the default for all API service providers. If you are selling me API services you should have an API for all your services, especially one that allows me to discover and manage all the APIs I'm applying your service to. 
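
As a quick illustration, here is a Python sketch that lists the API definitions a Tyk gateway is managing via its REST API--treat the endpoint, header, and field names as assumptions to verify against the current Tyk documentation:

```python
# A quick sketch of API discovery against a Tyk deployment, listing the API
# definitions the gateway is managing. The endpoint, header, and field names
# are assumptions to verify against the current Tyk docs.
import requests

TYK_GATEWAY = "http://localhost:8080"             # hypothetical deployment
HEADERS = {"x-tyk-authorization": "your-secret"}  # gateway admin secret

def list_managed_apis():
    response = requests.get(f"{TYK_GATEWAY}/tyk/apis", headers=HEADERS)
    response.raise_for_status()
    for api in response.json():
        print(api["name"], "->", api.get("proxy", {}).get("listen_path"))

# list_managed_apis()
```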

I am expanding my definition of a minimum viable blueprint for API service providers, and adding an API API as one of the default APIs. I'm going to be adding the account, billing, and a handful of other essential APIs to my default definition. If I'm using your service to manage any part of my API operations, I need to be automating discovery, management, and billing in our relationship.

It seems obvious to me, but I'm looking to provide a simple checklist that other API service providers can consider as they craft their strategy. My goal is to help make sure each stop along the lifecycle can be orchestrated in a programmatic way, the way Tyk does it.

Disclosure: Tyk is an API Evangelist partner.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, and I miss quite a bit, and depend on my network to help me know what is going on.