
API Management News

These are the news items I've curated in my monitoring of the API space that are related to managing APIs, and that I thought were worth including in my research. I'm using all of these links to better understand how APIs are being managed across a diverse range of implementations.

The Service Level Agreement (SLA) Definition For The OpenAPI Specification

I’m currently learning more about SLA4OAI, an open source standard for describing SLAs for APIs, which is based on the standards proposed by the OAI, adding an optional profile for defining service level agreements (SLAs) for APIs. “This SLA definition in a neutral vendor flavor will allow to foster innovation in the area where APIs expose and documents its SLA, API Management tools can import and measure such key metrics and composed SLAs for composed services aggregated way in a standard way.” It provides not just a needed standard for the API sector, but more importantly one that is built on top of an existing standard.

SLA4OAI provides an interesting way to define the SLA for any API, providing a set of objects that augment, and can be paired with, an OpenAPI definition using an x-sla vendor extension (a minimal sketch of this pairing follows the list):

  • Context - Holds the main information of the SLA context.
  • Infrastructure - Provides information about the tooling used for SLA storage, calculation, governance, etc.
  • Pricing - Global pricing data.
  • Metrics - A list of metrics to use in the context of the SLA.
  • Plans - A set of plans to define different service levels per plan.
  • Quotas - Global quotas; these are the default quotas, which can be overridden by each plan later.
  • Rates - Global rates; these are the default rates, which can be overridden by each plan later.
  • Guarantees - Global guarantees; these are the default guarantees, which can be overridden by each plan later.
  • Configuration - Defines the default configurations, which each plan can later override.
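
To make this concrete, here is a minimal, hypothetical sketch of what the pairing might look like, loosely based on the object list above. The field names and values are illustrative, not copied from the specification, so consult SLA4OAI itself before relying on them:

```yaml
# OpenAPI definition, pointing at its SLA document via the x-sla extension:
openapi: "3.0.0"
info:
  title: Example API
  version: "1.0"
x-sla: ./example-sla.yaml

# ./example-sla.yaml -- the SLA document itself (illustrative):
context:
  id: example-sla
  api: ./openapi.yaml
metrics:
  requests:
    type: integer
    description: Number of requests made against the API
plans:
  free:
    quotas:
      /widgets:
        get:
          requests:
            - max: 100
              period: day
  pro:
    pricing:
      cost: 50
      currency: USD
      billing: monthly
    quotas:
      /widgets:
        get:
          requests:
            - max: 10000
              period: day
```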

These objects provide all the details you will need to quantify the SLA for any OpenAPI defined API. To validate each of the SLAs, a separate Basic SLA Management Service is provided, implementing the services that control, manage, report on, and track SLAs. This provides the reporting output you will need to understand whether or not each individual API is meeting its SLA. It amounts to a universal SLA format that can be used to create SLA templates, applied as individual API SLAs, and then leveraged as a common schema for reporting on SLA monitoring, which can be used in conjunction with the API management layer of your API operations.

I’m still getting familiar with the specification, but I’m impressed with what I’ve seen so far. There is a lot of detail available in there, and it provides all the context that will be needed to quantify, measure, and report upon API SLAs. My only critique at this point is that I feel the pricing, metrics, plans, and quotas elements should be broken out into a separate specification, so that they can be used outside of the context of SLA management, as part of a wider API management strategy. There are plenty of scenarios where you will want to be using these elements, but not in the context of SLA enforcement (i.e., driving pricing and plan pages, billing and invoicing, and API discovery). Other than that, I’m pretty impressed with the work present in the specification.

It makes me happy to see sister specifications like this emerge, rather than being baked directly into OpenAPI. The way they’ve referenced the SLA from the OpenAPI definition, and the OpenAPI from the SLA definition, is how these things should occur, in my opinion. It is something I’ve done for years with the APIs.json format, having seen a future where there are many sister, or even competing, specifications all working in unison. Which is why I feel the pricing and plan elements should be decoupled from the SLA object here. However, I think this is a great start to something we are going to need if we expect this whole API thing to actually work at scale, and make sure the technology of APIs stays in sync with the business of APIs. Without it, there will always be a gulf between the technology and business units, much like the one we’ve seen historically between IT and business groups across enterprise organizations.


Ping Identity Acquires ElasticBeam To Establish New API Security Solution

You don’t usually find me writing about API acquisitions unless I have a relationship with the company, or there are other interesting aspects of the acquisition that make it noteworthy. This acquisition of Elastic Beam by Ping Identity has a little of both for me, as I’ve been working with the Elastic Beam team for over a year now, and I’ve been interested in what Ping Identity is up to because of some research I am doing around open banking in the UK, and the concept of industry level API identity and access management, as well as an industry level API management layer. All of which makes for an interesting enough mix for me to want to quantify here on the blog, load up in my brain, and share with my readers.

From the press release, “Ping Identity, the leader in Identity Defined Security, today announced the acquisition of API cybersecurity provider Elastic Beam and the launch of PingIntelligence for APIs.” I think this reflects some of the evolution of API security I’ve been seeing in the space, moving beyond just API management, and also being about security from the outside-in. The newly combined security solution, PingIntelligence for APIs, focuses in on automated API discovery, threat detection & blocking, API deception & honeypots, traffic visibility & reporting, and self-learning–merging the IAM, API management, and API security realms, for me, into a single approach to addressing security that is focused on the world of APIs.

While I find this an interesting intersection for the world of APIs in general, where I’m really intrigued by the potential is when it comes to the pioneering open banking API efforts coming out of the UK, and the role Ping Identity has played. “Ping’s IAM solution suite, the Ping Identity Platform, will provide the hub for Open Banking, where all UK banks and financial services organizations, and third-party providers (TPPs) wanting to participate in the open banking ecosystem, will need to go through an enrollment and verification process before becoming trusted identities stored in a central Ping repository.” It provides an industry level API management blueprint I think is worth tuning into.

Back in March, I wrote about the potential for the identity, access management, API management, and directory approach of open banking in the UK to be a blueprint for an industry level approach to securing APIs in an observable way. Where all the actors in an API ecosystem have to be registered and accessible in a transparent way through a neutral 3rd party directory before they can provide or access APIs. In this case it is banking APIs, but the model could apply to any regulated industry, including the world of social media, which I wrote about a couple months back after the Cambridge Analytica / Facebook shitshow. Bringing API management and security out into the open makes it more observable and accountable, which is the way it should be in my opinion–otherwise we are going to keep seeing the same games played that we’ve seen with high profile breaches like Equifax, and API management lapses like we see at Facebook.

This is why I find the Ping Identity acquisition of ElasticBeam interesting and noteworthy. The acquisition reflects the evolving world of API security, but it also has real world applications as part of important models for how we need to be conducting API operations at scale. ElasticBeam is a partner of mine, and I’ve been talking with them and the Ping Identity team since the acquisition. I’ll keep talking with them about their road map, and I’ll keep working to understand how they apply to the world of API management and security. I feel the acquisition reflects the movement in API security I’ve been wanting to see for a while, moving us beyond just authentication and API management, looking at API security through an external lens, exploring the potential of machine learning, but also not leaving everything we’ve learned so far behind.


Working To Keep Programming At The Edges Of The API Conversation

I’m fascinated by the dominating power of programming languages. There are many ideological forces at play in the technology sector, but the dogma that exists within each programming language community continues to amaze me. The relative absence of programming language dogma within the world of APIs is one of the reasons I feel it has been successful, but alas, other forms of dogma tend to creep in around specific API approaches and philosophies, making API evangelism and adoption a constant challenge.

The absence of programming languages in the API design, management, and testing discussion is why these disciplines have been so successful. People in these disciplines have been focused on the language agnostic aspects of just doing business with APIs. It is also one of the reasons the API deployment conversation is still so fragmented, with so many ways of getting things done. When it comes to API deployment, everyone likes to bring their programming language beliefs to the table and let them affect how we actually deliver APIs, which in my opinion is why API gateways have the potential to make a comeback, and even excel when it comes to playing the role of API intermediary, proxy, and gateway.

Programming language dogma is why many groups have so much trouble going API first. They see APIs as code, and have trouble transcending the constraints of their development environment. I’ve seen many web or HTTP APIs called Java APIs or Python APIs, or reflecting a specific language style. It is hard for developers to transcend their primary programming language, learn multiple languages, or think in a language agnostic way. It is not easy for us to think outside of our boxes, consider external views, and empathize with people who operate within other programming or platform dimensions. It is just easier to see the world through our own lenses, making the world of APIs either illogical, or something we need to bend to our way of doing things.

I’m in the process of evolving from my PHP and Node.js realm to a Go reality. I’m not abandoning the PHP world, because many of my government and institutional clients still operate in it, and I’m using Node.js for a lot of the serverless API stuff I’m doing. However, I can’t ignore the Go buzz I keep stumbling upon. I also feel like it is time for a paradigm shift, forcing me out of my comfort zone and pushing me to think in a new language. This is something I like to do every five years, shifting my reality, keeping me on my toes, and forcing me to not get too comfortable. I find that this keeps me humble and thinking across programming languages, which helps me realize the power of APIs, and how they transcend programming languages, making data, content, algorithms, and other resources more accessible via the web.


People Still Think APIs Are About Giving Away Your Data For Free

After eight years of educating people about sensible API security and management, I’m always amazed at how many people I come across who still think public web APIs are about giving away access to your data, content, and algorithms for free. I regularly come across very smart people who say they’d be doing APIs, but they depend on revenue from selling their data and content, and wouldn’t benefit from just putting it online for everyone to download for free.

I wonder when we started thinking the web was about giving everything away for free? It is something I’m going to have to investigate a little more. For me, it shows how much education we still have ahead of us when it comes to informing people about what APIs are, and how to properly manage them. Which is a problem, when many of the companies I’m talking to are most likely already doing APIs to drive internal systems and public mobile applications. They are either unaware of the APIs that already exist across their organization, or they think that because they don’t have a public developer portal showcasing their APIs, they are somehow much more private and secure than if they were openly offering them to partners and the public.

Web API management has been around for over a decade now. Requiring ALL developers to authenticate when accessing any API, putting APIs into different access tiers, limiting the rate of consumption, and logging and billing for all API usage isn’t anything new. Amazon has been extremely public about their AWS efforts, and the cloud isn’t a secret. The fact that smart business leaders see all of this and do not see that APIs are driving it all represents a disconnect amongst business leadership. It is something I’m going to be testing out a little bit more, to see what levels of knowledge exist across Fortune 1000 companies, helping paint a picture of how they view the API landscape, and helping me quantify their API literacy.
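
None of this is exotic. As a minimal sketch of what any API management layer is doing under the hood (hypothetical names and limits throughout), tiered access with rate limits and a usage log only takes a few lines:

```python
import time
from collections import defaultdict

# Hypothetical access tiers -- every key maps to a tier with an hourly limit.
TIERS = {"free": 100, "partner": 10_000, "internal": 1_000_000}
API_KEYS = {"abc123": "free", "def456": "partner"}  # keys issued to developers

usage_log = defaultdict(list)  # key -> timestamps of recent requests

def check_request(api_key: str) -> bool:
    """Authenticate the key, enforce its tier's rate limit, and log usage."""
    tier = API_KEYS.get(api_key)
    if tier is None:
        return False  # unknown key -- no anonymous access
    window_start = time.time() - 3600
    recent = [t for t in usage_log[api_key] if t > window_start]
    if len(recent) >= TIERS[tier]:
        return False  # over the tier's hourly limit
    usage_log[api_key] = recent + [time.time()]
    return True  # the log of this consumption is what reporting and billing run on
```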

Educating business leaders about APIs has been part of my mission since I started API Evangelist in 2010, and it is something that will continue to be a focus of mine. This lack of awareness is why we end up with damaging incidents like the Equifax breach, and the Cambridge Analytica / Facebook scandal. It’s how we end up with so many trolls on Twitter, and out of balance API ecosystems across federal, state, and municipal governments. It is a problem we need to address as an industry, working to educate business leaders about common patterns for securing and managing our API resources. This process always begins with education and API literacy, but much of the confusion is a symptom of the disconnect in storytelling around public vs. private APIs, when in reality there are just APIs that are secured and managed properly, or not.


Making Connections At The API Management Layer

I've been evaluating API management providers, and the important stop along the API lifecycle they serve, for eight years now. It is a space I'm very familiar with, and I have enjoyed watching it mature, evolve, and become more standardized, and lately more commoditized. I've enjoyed watching the old guard (3Scale, Apigee, and Mashery) be acquired, and API management be baked into the cloud with AWS, Azure, and Google. I've also had fun learning about Kong, Tyk, and the next generation of API management providers as they grow and evolve, as well as some of the older players like Axway as they work to retool so that they can compete and even lead the charge in the current environment.

I am renewing my efforts to study what each of the API management solutions provide, pushing forward my ongoing API management research, understanding the current capacity of the active providers, and how they are potentially pushing forward the conversation. One of the things I'm extremely interested in learning more about is the connector, plugin, and extensibility opportunities that exist with each solution. Functionality that allows other 3rd party API service providers to inject their valuable services into the management layer of APIs, bringing other stops along the API lifecycle into the management layer, and allowing API providers to do more than just what their API management solution delivers. Turning the API management layer into much more than just authentication, service plan management, logging, analytics, and billing.

Over the last year I've been working with [API security provider ElasticBeam](https://www.elasticbeam.com/) to help make sense of what is possible at the API management layer when it comes to securing our APIs. ElasticBeam can analyze the surface area of an API, as well as the DNS, web, API management, web server, and database logs for potential threats, and apply their machine learning models in real time. Without direct access at the API management layer, ElasticBeam is still valuable, but it cannot respond in real time to threats by shutting down keys, blocking requests, and otherwise countering what is being leveraged against our API infrastructure. Sure, you can still respond after the fact based upon what ElasticBeam learns from scanning all of your logs, but without being able to connect directly into your API management layer, the effectiveness of their security solution is significantly diminished.

Complementing, but also contrasting with ElasticBeam, I'm also working with [Streamdata.io](http://streamdata.io) to help understand how they can be injected at the API management layer, adding an event-driven architectural layer to any existing API. The first part of this would involve turning high volume APIs into real time streams using Server-Sent Events (SSE). Future advancements would focus on topical streaming, webhooks, and WebSub enhancements to transform simple request and response APIs into event-driven streams of information that only push what has changed to subscribers. Like ElasticBeam, Streamdata.io would benefit from being directly baked into the API management layer as a connector or plugin, augmenting it with a next generation event-driven layer that would complement what any API management solution brings to the table. Without an extensible connector or plugin layer at the API management level, you can't inject additional services like security with ElasticBeam, or event-driven architecture with Streamdata.io.
I'm going to be looking for this type of extensibility as I profile the features of all of the active API management providers. I'm looking to understand the core features each API management provider brings to the table, but I'm also looking to understand how modern these API management solutions are when it comes to seamlessly working with other stops along the API lifecycle, and specifically how these other stops can be serviced by other 3rd party providers. Similar to my regular rants about API service providers always needing to have APIs, you are going to hear me rant more about API service providers needing to have connector, plugin, and other extensibility features. API management service providers can put their APIs to work driving this connector and plugin infrastructure, but it should also allow for more seamless interactions and benefits for their customers, brought to the table by their most trusted partners.
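
To show what I mean by extensibility, here is an entirely hypothetical sketch of a plugin interface at the management layer (not any specific vendor's API), illustrating how a 3rd party service could be injected into the request path:

```python
class ManagementPlugin:
    """Hypothetical contract a 3rd party service implements to plug in."""

    def on_request(self, request) -> bool:
        return True  # return False to block the request in real time

    def on_response(self, request, response) -> None:
        pass  # e.g. push the event into a streaming or analytics layer


class SecurityPlugin(ManagementPlugin):
    """Stand-in for a security service scoring traffic at the management layer."""

    def on_request(self, request) -> bool:
        # A security provider could score the request against its models and
        # block it, or revoke the offending key, before it hits the backend.
        return not self.looks_malicious(request)

    def looks_malicious(self, request) -> bool:
        return False  # placeholder for the provider's actual analysis


plugins = [SecurityPlugin()]  # registered into the gateway's request chain

def handle(request):
    if not all(p.on_request(request) for p in plugins):
        return {"status": 403}
    response = {"status": 200}  # stand-in for proxying to the backend API
    for p in plugins:
        p.on_response(request, response)
    return response
```

With hooks like these, a security service gets the real time blocking ability described above, and a streaming service gets a feed of every response to push to subscribers, all without the API provider leaving their management layer.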


Measuring Value Exchange Around Government Data Using API Management

I’ve written about how the startup community has driven the value exchange that occurs at the API management layer down a two lane road with API monetization and plans. To generate the value that investors have been looking for, we have ended up with a free, pro, enterprise approach to measuring the value exchanged through API integration, when in reality there is so much more going on here. Something that really becomes evident when you begin evaluating the API conversations going on across government at all levels.

In conversation after conversation I have with government API folks, I hear that they don’t need API management reporting, analysis, and billing features. There is a perception that government isn’t selling access to APIs, so API management measurement isn’t really needed. This misses a huge opportunity to be measuring, analyzing, and reporting upon API usage, and it requires a huge amount of re-education on my part to help API providers within government understand how they need to be measuring value exchange at the API management level.

Honestly, I’d love to see government agencies actually invoicing API consumers at all levels for their usage, even if the invoices aren’t in dollars. Measuring API usage, then reporting, quantifying, and sending an invoice each month, would help bring awareness to how government digital resources are being put to use (or not). If all data, content, and algorithms in use across federal, state, and municipal government were available via APIs, then measured, rate limited, and reported upon across all internal, partner, and public consumers, the awareness around the value of government digital assets would increase significantly.
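
As a sketch of what I mean, with entirely hypothetical data and names since no agency I know of is doing this yet, here is a month of API management log records rolled up into an invoice, even when the amounts aren't dollars:

```python
from collections import Counter

# One record per API call, as captured by the management layer (hypothetical).
usage = [
    {"consumer": "agency-internal", "path": "/permits"},
    {"consumer": "acme-corp", "path": "/permits"},
    {"consumer": "acme-corp", "path": "/facilities"},
]

# Notional per-call value of each resource -- units, not necessarily dollars.
NOTIONAL_VALUE = {"/permits": 0.01, "/facilities": 0.05}

def monthly_invoice(consumer: str) -> str:
    """Roll a consumer's usage up into invoice lines and a total."""
    calls = Counter(r["path"] for r in usage if r["consumer"] == consumer)
    lines = [f"  {path}: {n} calls = {n * NOTIONAL_VALUE[path]:.2f}"
             for path, n in calls.items()]
    total = sum(n * NOTIONAL_VALUE[p] for p, n in calls.items())
    return "\n".join([f"Invoice for {consumer}:", *lines, f"  total: {total:.2f}"])

print(monthly_invoice("acme-corp"))
```

Even an invoice that is never paid makes the value exchange visible to the business groups deciding what gets funded.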

I’m not saying we should charge for all access to government data. I’m saying we should be measuring and reporting upon all access to government data. APIs, and a modern API management layer, are how you do this. We have the ability to measure the value exchanged and generated around government digital resources, and we shouldn’t let misconceptions around how the private sector measures and generates revenue at the API layer interfere with government agencies benefitting from this approach. If you are publishing an API at a government agency, I’d love to learn more about how you are managing access to your APIs, how you are reporting upon the reading and writing of data across applications, and how you are sharing and communicating around these consumption reports.


API Gateway As Just A Node And Moving Beyond Backend and Frontend

The more I study where the API management, gateway, and proxy layer is headed, the less I see a frontend or a backend; I see just a node. A node that can connect to existing databases, or what is traditionally considered a backend system, but can also just as easily proxy and be a gateway to any existing API. A lot has been said about API gateways deploying APIs from an existing database. There has also been considerable discussion around proxying existing internal APIs or web services so that we can deploy newer APIs. However, I don’t think there has been nearly as much discussion around proxying other 3rd party public APIs–which flips the backend and frontend discussion on its head for me.

After profiling the connector and plugin marketplaces for a handful of the leading API management providers, I am seeing a number of API deployment opportunities for Twilio, Twitter, SendGrid, etc. Allowing API providers to publish API facades for commonly used public APIs, obfuscating away the 3rd party provider, and making the API your own. Allowing providers to build a stack of APIs from existing backend systems, private APIs and services, as well as 3rd party public APIs. Shifting gateways and proxies from being API deployment and management gatekeepers for backend systems, to being nodes of connectivity for any system, service, and API that you can get your hands on. Changing how we think about designing, deploying, and managing APIs at scale.
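
As a rough sketch of the idea (hypothetical routes and upstreams, not any particular gateway's configuration), a node-style gateway treats an internal service and a 3rd party public API as interchangeable upstreams behind its own facade:

```python
import requests

# Each facade route maps to an upstream of any kind (all illustrative).
ROUTES = {
    "/messages": {"kind": "third_party", "url": "https://api.example.com/v1/messages"},
    "/orders":   {"kind": "internal",    "url": "http://orders.internal/api"},
}

def gateway(path: str, params: dict) -> dict:
    """The gateway is just a node: it proxies whatever upstream backs the route."""
    route = ROUTES.get(path)
    if route is None:
        return {"status": 404}
    # Authentication, rate limiting, and logging would run here, regardless
    # of whether the upstream is a database-backed internal service or
    # someone else's public API. Assumes JSON upstreams for simplicity.
    upstream = requests.get(route["url"], params=params, timeout=5)
    return {"status": upstream.status_code, "body": upstream.json()}
```

The facade's consumers never need to know which routes are backed by your own systems and which are proxied from a 3rd party provider.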

I feel like this is why API deployment is such a hard thing to discuss. It can mean so many different things, and be accomplished in so many ways. It can be driven by a database, or strictly using code, or just be taking an existing API and turning it into something new. I don’t think it is something that is well understood amongst developer circles, let alone in the business world. An API gateway can be about integration just as much as it can be about deployment. It can be both simultaneously. Further complicating what APIs are all about, but also making the API gateway a more powerful concept, continuing to renew this relic of our web services past into something that will accommodate the future.

What I really like about this notion is that it ensures we will all be API providers as well as consumers. The one-sided nature of API providers in recent years has always frustrated me. It has allowed people to stay isolated within their organizations, and not see the world from the API consumer perspective. While API gateways as nodes won’t bring everyone out of their bubble, they will force some API providers to experience more pain than they have historically, understanding what it takes to not just provide APIs, but to do so in a reliable, consistent, and friendly way. If we all feel the pain of integration and have to depend on each other, the chances we will show a little more empathy will increase.


API Management In Real-Time Instead Of After The Media Shitstorm

I have been studying API management for eight years now. I’ve spent a lot of time understanding the approach of leading API providers, the services and tools put out there by API service providers from 3Scale to Tyk, and how cloud service providers like AWS are baking API management into their clouds. API management isn’t the most exciting aspect of doing APIs, but I feel it is one of the most important elements, delivering on the business and politics of doing APIs, which can often make or break a platform and the applications that depend on it.

Employing a common API management solution, and having a solid plan in place, takes time and investment. To do it properly takes lots of regular refinement, investment, and work. It is something that will often seem unnecessary–having to review applications, get to know developers, and consider how they fit into the bigger picture. Making it something that can easily be pushed aside for other tasks on a regular basis–until it is too late. This is frequently the case when your API access isn’t properly aligned with your business model, and there are no direct financial strings attached to new API users, or a line distinguishing active from inactive users, let alone governance of what they can do with the data, content, and algorithms made accessible via APIs.

The API management layer is where the disconnect between API providers and consumers occurs. It is also where the connection, and meaningful engagement, occurs when done right. In most cases API providers aren’t being malicious; they are just investing in the platform as their leadership has directed, and acting upon the stage their business model has set into motion. If your business model is advertising, then revenue is not directly connected to your API consumption, and you just turn on the API faucet to let in as many consumers as you can possibly attract. Your business model doesn’t require that you play gatekeeper, qualify, or get to know your API developers, or the applications they are developing. This is what we’ve seen with Facebook, Twitter, and other platforms who are experiencing challenges at the API management layer lately, due to a lack of management over the last 5+ years.

There was no incentive for Facebook or Twitter to review applications, audit their consumption, or get to know their usage patterns. The business model is an indirect one, so investment at the API management layer is not a priority. They only respond to situations once they begin to hurt their image, and potentially rise up to hurt their bottom line. The problem with this type of behavior is that other API providers see it and think it is a problem with doing APIs, not a problem with failing to do API management. Having a business model that is sensibly connected to your API usage, and a team that is actively managing and engaging with your API consumers, is how you deal with these challenges. If you manage your API properly, you are dealing with negative situations early on, as opposed to waiting until there is a media shitstorm before you start reining in API consumption on your platform.

Sensible API management is something I’ve been writing about for eight years. It is something I will continue to write about for many more years, helping companies understand the importance of doing it correctly, and being proactive rather than reactive. I’ll also be regularly writing about the subject to help API consumers understand the dangers of using platforms that do not have a clear business model, as it is usually a sign of this API management disconnect I’m talking about. It is something that will ultimately cause problems down the road, and contribute to platform instability, and APIs potentially being limited or shuttered as part of these reactionary responses we are seeing from Facebook and Twitter currently. I know that my views of API management are out of sync with popular notions of how you incentivize exponential growth on a platform, but my views are in alignment with healthy API ecosystems, successful API consumers, and reliable API providers who are around for the long term.


Facebook And Twitter Only Now Beginning To Police Their API Applications

I’ve been reading about all the work Facebook and Twitter have been doing over the last couple of weeks to begin asserting more control over their API applications. I’m not talking about the deprecation of APIs; that is a separate post. I’m focusing on them reviewing the applications that have access to their APIs, and shutting off access for the ones who aren’t adding value to the platform, or are violating the terms of service. Doing the hard work to maintain a level of quality on the platform, which is something they should have been doing all along.

I don’t want to diminish the importance of the work they are doing, but it really is something that should have been done along the way–not just when something goes wrong. This kind of behavior sets the wrong tone across the API sector, and people tend to focus on the thing that went wrong, rather than the best practices of what you should be doing to maintain quality across API operations. Other API providers will hesitate to launch public APIs because they won’t want to experience the same repercussions as Facebook and Twitter, completely overlooking the fact that you can have public APIs and maintain control along the way. Setting the wrong precedent for API providers to emulate, and damaging the overall reputation of operating public APIs.

Facebook and Twitter have both had the tools all along to police the applications using their APIs. The problem is that the incentives to do so, and to prioritize these efforts, aren’t there, due to an imbalance in their business model, and a lack of diversity in their leadership. When you have a bunch of white dudes with a libertarian ethos pushing a company towards profitability with an advertising driven business model, investing in quality control at the API management layer just isn’t a priority. You want as many applications, users, and as much activity as you can possibly get, and when you don’t see the abuse, harassment, and other illnesses, there really is no problem from your vantage point. That is, until you get called out in the press, or are forced to testify in front of congress. The reason us white dudes get away with this is that there are no repercussions; we just get to ignore things until they become a problem, apologize, perform a little bit to show we care, and wait until the next problem occurs.

This is the wrong API model to put out there. API providers need to see the benefits of properly reviewing applications that want access to their APIs, and the value of setting a higher bar for how applications use the API. There should be regular reviews of active applications, and audits of how they are accessing, storing, and putting resources to work. This isn’t easy work, or inexpensive to do properly. It isn’t something you can put off until you get in trouble. It is something that should be done from the beginning, and conducted regularly, as part of the operations of a well funded team. You can have public APIs for a platform, and avoid privacy, security, and other shit-shows. If you need an example of doing it well, look at Slack, who has a successful public API, even with a high level of bot automation, but somehow manages to stay out of the spotlight for doing dumb things. It is because their API management practices are in better alignment with their business model–the incentives are there.

For the next 3-5 years I’m going to have to hear from companies who aren’t doing public APIs because they don’t want to make the same mistakes as Facebook and Twitter. All because Facebook and Twitter were able to get away with such bad behavior for so long, avoid doing the hard work of managing their API platforms, and receive so much bad press. All in the name of growth and profits at all costs. Now, I’m going to have to write a post every six months showcasing Facebook and Twitter as pioneers of how NOT to run your platform, explaining the importance of healthy API management practices, and of investing in your API teams so they have the resources to do it properly. I’d rather have positive role models to showcase than poorly behaved ones, where I have to work overtime to change perceptions and alter API providers’ behavior. As an API community, let’s learn from what has happened, invest properly in our API management layers, properly screen and get to know who is building applications on our resources, and regularly tune into and audit their behavior. Yes, it takes more investment, time, and resources, but in the end we’ll all be better off for it.




Nest Branding And Marketing Guidelines

I’m always looking out for examples of API providers who have invested energy into formalizing processes around the business and politics of API operations. I’m hoping to gather a variety of approaches that I can aggregate into a single blueprint for use in my API storytelling and consulting. The more I can help API providers standardize what they do, the better off the API sector will be, so I’m always investing in the work that API providers should be doing, but that doesn’t always get prioritized.

The other day while profiling the way that Nest uses Server-Sent Events (SSE) to stream activity via thermostats, cameras, and the other devices they provide, and I stumbled across their branding policies. It provides a pretty nice set of guidance for Nest developers in respect to the platform’s brand, and something you don’t see with many other API providers. I always say that branding is the biggest concern for new API providers, but also the one that is almost never addressed by API providers who are in operation–which doesn’t make a whole lot of sense to me. If I had a major corporate brand, I’d work to protect it, and help developers understand what is important.

The Nest marketing program is intended to qualify applications, and then allow them to use the “Works with Nest” branding in their marketing and social media. To get approved you have to submit your product or service for review, and as part of the review process they verify that you are in compliance with all of their branding policies.

To apply for the program you have to email them with all of the following details regarding the marketing efforts of your product or service where you will be using the “Works with Nest” branding:

  • Description of your marketing program
  • Description of your intended audience
  • Planned communication (list all that apply): Print, Radio, TV, Digital Advertising, OOH, Event, Email, Website, Blog Post, Social Post, Online Video, PR, Sales, Packaging, Spec Sheet, Direct Mail, App Store Submission, or other (please specify)
  • High resolution assets for the planned communication (please provide URL for file download); images should be at least 1000px wide
  • Planned geography (list all that apply): Global, US, Canada, UK, or other (please specify)
  • Estimated reach: 1k - 50k, 50k- 100k, 100k - 500k, or 500k+
  • Contact Information: First name, last name, email, company, product name, and phone number

Once submitted, Nest says they’ll provide feedback or approval of your request within 5 business days, and if all is well, they’ll approve your marketing plan for that Works with Nest product. If they find a submission is not in compliance with their branding policies, they’ll ask you to make corrections to your marketing so you can submit again. I don’t have too many examples of a marketing and branding submission process as part of my API branding research. I have the user interface guide and trademark policy as building blocks, but the user experience guide, and the details of the submission process, are all new entries to my research.

I feel like API providers should be able to defend the use of their brand, but I do not feel API providers can penalize and cut off API consumers’ access unless there are clear guidelines and expectations presented regarding what is healthy and unhealthy behavior. If you are concerned about how your API consumers reflect your brand, then take the time to put together a program like Nest has. You can look at my wider API branding research if you need other examples, or I’m happy to dive into the subject more as part of a consulting arrangement, and help craft an API branding strategy for you. Just let me know.

Disclosure: I do not “Work with Nest” – I am just showcasing the process. ;-)


Tyk Is Conducting API Surgery Meetups

I was having one of my regular calls with the Tyk team as part of our partnership, discussing what they are up to these days. I’m always looking to understand their road map, and see where I can discover any stories to tell about what they are up to. A part of their strategy to build awareness around their API management solution that I found interesting was the API Surgery event they held in Singapore last month, where they brought together API providers, developers, and architects to learn more about how Tyk can help them out in their operations.

API surgery seems like an interesting evolution of the Meetup formula. They have a lot of the same elements as a regular Meetup, like making sure there is pizza and drinks, but instead of presentations, they ask folks to bring their APIs along, walk them through setting up Tyk, and deliver an API management layer for their API operations. If attendees don’t have their own API, no problem–Tyk makes sure there are test APIs for them to use while learning how things work. Helping them understand how to deliver API developer onboarding, documentation, authentication, rate limiting, monitoring, analytics, and the other features that Tyk delivers.

They had about 12 people show up to the event, a handful of business users as well as some student developers, and they even got a couple of new clients out of it. It seems like a good way to not beat around the bush about what an API service provider wants from event attendees, and to get down to the business at hand: learning how to secure and manage your API. I think the Meetup format still works for API providers and service providers looking to reach an audience, but I like hearing about evolutions of the concept, doing things that might bring out a different type of audience, and cutting out some of the same tactics we’ve seen play out over the last decade.

I could see Meetups like this working well at this scale. You don’t need to attract large audiences with this approach. You just need a handful of interested people, looking to learn about your solution, and understand how it solves a problem they have. Tyk doesn’t have to play games about why they are putting on the event, and people get the focus time with a single API service provider. Programming language meetups still make sense, but I think as the API sector continues to expand that API service provider, or even API provider focused gatherings can also make sense. I’m going to keep an eye on what Tyk is doing, and look for other examples of Meetups like this. It might reflect some positive changes out there on the landscape.

Disclosure: Tyk is an API Evangelist partner.


That Point Where API Session Management Becomes API Surveillance

I was talking to my friend’s TC2027 Computer and Information Security class at Tec de Monterrey via a Google hangout today, and one of the questions I got was around managing API sessions using JWT, spawned from a story about JWT security. A student was curious about managing sessions across API consumption while addressing security concerns, making sure tokens aren’t abused, and making sure API consumption by 3rd parties who shouldn’t have access doesn’t go unnoticed.

I feel like there are two important, and often competing, interests at play here. We want to secure our API resources, making sure data isn’t leaked, and prevent breaches. We want to make sure we know who is accessing resources, and develop a heightened awareness regarding who is accessing what, and how they are putting it to use. However, the further we march down the road of managing sessions, logging, analyzing, tracking, and securing our APIs, the more we are simultaneously ramping up the surveillance of our platforms, and of the web, mobile, network, and device clients who are putting our resources to use. Sure, we want to secure things, but we also want to think about the opportunity for abuse, even as we are working to manage abuse on our platforms.

To answer the question around how to track sessions across API operations, I recommended thinking about the identification layer, which includes JWT and OAuth, depending on the situation. After that, you should be looking at other dimensions for identifying sessions, like IP address, timestamps, user agent, and any other identifying characteristics. An app or user token is much more about identification than it is about actual security, and to truly identify a valid session you should have more than one dimension beyond that key, acknowledging valid sessions, as well as sessions in general. Identifying what healthy sessions look like, as well as unhealthy, or unique sessions that might be out of the realm of normal operations.

To accomplish all of this, I recommend implementing a modern API management solution, but also pulling in logging from all other layers, including DNS, web server, database, and any other system in the stack. To be able to truly identify healthy and unhealthy sessions you need visibility, and synchronicity, across all logging layers of the API stack. Do the API management logs reflect what the DNS and web server logs show? This is where access tiers, rate limits, and overall consumption awareness really come in, along with having the right tools to lock things down, and freeze keys and tokens, as well as being able to identify what healthy API consumption looks like, providing a blueprint for what API sessions should, or shouldn’t, be occurring.
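
As a minimal sketch of what multi-dimensional session validation could look like (hypothetical names and logic), the token identifies the caller, while the other dimensions decide whether the session looks healthy:

```python
known_sessions = {}  # token -> dimensions seen when the session was established

def session_looks_healthy(token: str, ip: str, user_agent: str) -> bool:
    """The token identifies; the extra dimensions validate."""
    baseline = known_sessions.get(token)
    if baseline is None:
        known_sessions[token] = {"ip": ip, "ua": user_agent}
        return True  # first sighting of the token becomes the baseline
    # A changed IP or user agent doesn't prove abuse on its own, but it is
    # a signal worth correlating against DNS, web server, and database logs
    # before freezing the key or token.
    anomalies = sum([ip != baseline["ip"], user_agent != baseline["ua"]])
    return anomalies == 0
```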

At this point in the conversation I also like to point out that we should be stopping to consider at what point all of this API authentication, security, logging, analysis, reporting, and session management becomes surveillance. Are we seeking API security because it is what we need, or just because it is what we do? I know we are defensive about our resources, and we should be going the distance to keep data private and secure, but at some point, by collecting more data and establishing more logging streams, we actually begin to work against ourselves. I’m not saying it isn’t worth it in some cases, I am just saying that we should be questioning our own motivations, and the potential for introducing more abuse, as we police, surveil, and secure our APIs from abuse.

As technologists, we aren’t always the best at stepping back from our work, and making sure we aren’t introducing new problems alongside our solutions. This is why I have my API surveillance research, alongside my API authentication, security, logging, and other management research. We tend to get excited about, and hyper focused on the tech for tech’s sake. The irony of this situation is that we can also introduce exploitation and abuse around our practices for addressing exploitation and abuse around our APIs. Let’s definitely keep having conversations around how we authenticate, secure, and log to make sure things are locked down, but let’s also make sure we are having sensible discussions around how we are surveilling our API consumers, and end users along the way.


The Concept Of API Management Has Expanded So Much The Concept Should Be Retired

API management was the first area of my research I started tracking on in 2010, and has been the seed for the 85+ areas of the API lifecycle I’m tracking on in 2017. It was a necessary vehicle for the API sector to move more mainstream, but in 2017 I’m feeling the concept is just too large, and the business of APIs has evolved enough that we should be focusing in on each aspect of API management on its own, and retire the concept entirely. I feel like at this point it will continue to confuse, and be abused, and that we can get more precise in what we are trying to accomplish, and better serve our customers along the way.

The main concepts of API management at play have historically been authentication, service composition, logging, analytics, and billing. There are plenty of other elements that have often been lumped in there, like portals, documentation, support, and other aspects, but securing, tracking, and generating revenue from a variety of APIs and consumers has been center stage. I’d say that some of the positive aspects of the maturing and evolution of API management include more of a focus on authentication, as well as the awareness introduced by logging and analytics. Some areas that worry me are that security discussions often stop with API management, and that we don’t seem to be having evolved conversations around service composition, billing, and monetization of our API resources. You rarely see these things discussed when we talk about GraphQL, gRPC, evented architecture, data streaming, and other hot topics in the API sector.

I feel like the technology of APIs conversation has outpaced the business of APIs conversation as API management has matured and moved forward. Advancements in logging, analytics, and reporting have definitely continued, but understanding the value generated by providing different services to different consumers, seeing the cost associated with operations and the value generated, then charging, or even paying, the consumers involved in that value generation in real time, seems to be getting lost. We are getting better at the tech of making our digital bits more accessible and moving them around, but we seem to be losing the thread about quantifying the value, and associating revenue with it in real time. I still see this aspect of API management occurring, I’m just not seeing the conversations around it move forward as fast as the other areas of API management.

API monetization and plans are two separate areas of my research, and are something I’ll keep talking about, alongside authentication, logging, analysis, and security. I think the reason we don’t hear more stories about API service composition and monetization is that a) companies see this as their secret sauce, and b) there aren’t service providers delivering exclusively in these areas, adding to the conversation. How to rate limit, how to craft API plans, and how to set pricing at the service and tier levels are some of the most common questions I get. Partly because there isn’t enough conversation and there aren’t enough resources to help people navigate, but also because there is insecurity, and there are skewed views of intellectual property and secret sauce. People in the API sector suck at sharing anything they view as their secret sauce, and with no service providers dedicated to API monetization, nobody is pumping the story machine (beyond me).

I’m feeling like I might be winding down my focus on API management as a single area, and focusing in on the specific aspects of it. I’ve been working on my API management guide over the summer, but I’m thinking I’ll abandon it and just focus on the specific aspects of conducting API management. IDK. Maybe I’ll still provide a 100K view for people, while introducing separate, much deeper looks at the elements that make up API management. I still have to worry about onboarding the folks who haven’t been around the sector for the last ten years, and help them learn everything we have learned along the way. I’m just feeling like the concept is a little dated, and is something that can start working against us in some of the conversations we are having about our API operations, where important elements like security and monetization can fall through the cracks.


The US Postal Service Wakes Up To The API Management Opportunity In New Audit

The Office of Inspector General for the US Postal Service published an audit report on the federal agency's API strategy, which has opened their eyes to the potential of API management, and the direct value it can bring to their customers and their business. The USPS has some extremely high value APIs that are baked into ecommerce solutions around the country, and they have even launched an API management solution recently, but until now they have not been actively analyzing and using API usage data to guide any of their business planning decisions.

According to the report, “The Postal Service captures customer API usage data and distributes it to stakeholders outside of the Web Tools team via spreadsheets every month. However, management is not using that data to plan for future API needs. This occurred because management did not agree on which group was responsible for reviewing and making decisions about captured usage data.” I’m sure this is common in other agencies, as APIs are often evolved within IT groups that can have significant canyons between themselves and business units. Data isn’t shared unless a project specifically designates it to be shared, or leadership directs it, leaving real-time API management data out of reach of the business groups making decisions.

It is good to see another federal agency wake up to the potential of API management, and the awareness it can bring to business groups. It’s not just some technical implementation with log files, it is actual business intelligence that can be used to guide the agency forward, and help it better serve its constituents (customers). The awareness introduced by doing APIs, then properly managing them, analyzing usage, and building an understanding of what is happening, is a journey. It’s a journey that not all federal agencies have even begun (sadly). It is important that other agencies follow the USPS’s lead, because it is likely you are already gathering valuable data, and just passing it on to external partners like the USPS has been doing, without capturing any of the value for yourself. Compounding the budget, and other business challenges you are already facing, when you could be using this data to make better informed decisions, or even more importantly, to establish new revenue streams from your valuable public sector resources.

While it may seem far fetched at the moment, this API management layer reflects the future of government revenue and the tax base. This is how companies in the private sector are generating revenue, and if commercial partners are building solutions on top of public sector data and other digital resources, these government agencies should be able to generate new revenue streams from these partnerships. This is how government works with physical public resources, and there should be no difference when it comes to digital public resources. We just haven’t reached the realization that this is the future of how we make sure government is funded, and has the resources it needs to not just compete in the digital world, but actually innovate as many of us hope it will. It will take many years for federal agencies to get to this point. This is why they need to get started on their API journey, and begin managing their data assets in an organized way as the USPS is beginning to do.

API management has been around for a decade. It isn’t some new concept, and there are plenty of open source solutions available for federal agencies to put to use. All the major cloud platforms have it baked into their operations, making it a commodity alongside compute, storage, DNS, and the other building blocks of our digital worlds. I’ll be looking for other ways to influence government leadership to light the API fire within federal agencies, like the Office of the Inspector General has done at the U.S. Postal Service. It is important that agencies develop awareness, and make business decisions from the APIs they offer, just like they are doing with their web properties. Something that will set the stage for how the government serves its constituents and customers, and generates the revenue it needs to keep operating, and even possibly lead in the digital evolution of the public sector.


Disaster API Rate Limit Considerations

This API operations consideration won’t apply to every API, but for APIs that provide essential resources in a time of need, I wanted to highlight an API rate limit cry for help that came across my desk this weekend. Our friend over at Pinboard alerted me to someone in Texas asking for help getting Google to increase the Google Maps API rate limits for an app they were depending on as Hurricane Harvey hit.

The app they depended on had ceased working and was showing a Google Maps API rate limit error, and they were trying to get the attention of Google to help increase usage limits. As Pinboard points out, it would be nice if Google had more direct support channels for making requests like this, but it would also be great if API providers were monitoring API usage, aware of applications serving geographic locations being impacted, and willing to relax API rate limiting on their own. There are many reasons API providers leverage their API management infrastructure to make rate limit exceptions, and natural disasters seem like they should be top of the list.
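
Mechanically, this is a small ask for providers who already have API management in place. Here is a hypothetical sketch, with application names and regions that are purely illustrative, of relaxing limits for applications flagged as serving an impacted area:

```python
DEFAULT_LIMIT = 1_000  # requests per day (illustrative)

DISASTER_REGIONS = {"tx-gulf-coast"}  # updated as events unfold
PUBLIC_SERVICE_APPS = {"shelter-map": "tx-gulf-coast"}  # known apps -> region served

def effective_rate_limit(app_id: str) -> int:
    """Relax limits automatically for apps serving a declared disaster region."""
    if PUBLIC_SERVICE_APPS.get(app_id) in DISASTER_REGIONS:
        return DEFAULT_LIMIT * 100  # or lift the limit entirely for the duration
    return DEFAULT_LIMIT
```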

I don’t think API providers are being malicious with rate limits in this area. I just think it is yet another area where technologists are blind to the impact technology is making (positive or negative) on the world around us. Staying in tune with the needs of applications that help people in their time of need seems like it will have two components: 1) knowing your applications (you should be doing this anyways) and identifying the ones that provide a public service, and 2) staying in tune with natural and other disasters happening around the world. We see larger platforms like Facebook and Twitter rolling out permanent solutions to assist communities in their times of need, and it seems like something that other smaller platforms should be tuning into as well.

Disaster support and considerations will be an area of API operations I’m going to consider adding into my research, and spending more time to identify best practices, and what platforms are doing to better serve communities in a time of need using APIs.


The Importance Of API Stories

I am an API storyteller before I am an API architect, designer, or evangelist. My number one job is to tell stories about the API space. I make sure there are (almost) always 3-5 stories a day published to API Evangelist about what I’m seeing as I conduct my research on the sector, and the thoughts I’m having while consulting and working on API projects. I’ve been telling stories like this for seven years, which has proven to me how much stories matter in the world of technology, and the worlds it is impacting–which is pretty much everything right now.

Occasionally I get folks who like to criticize what I do, making sure I know that stories don’t matter. That nobody in the enterprise or startups care about stories. Results are what matter. Ohhhhh reeeaaaly. ;-) I hate to tell you, it is all stories. VC investment in startups is all about the story. The markets all operate on stories. Twitter. Facebook. LinkedIn. Medium. TechCrunch. It is all stories. The stories we tell ourselves. The stories we tell each other. The stories we believe. The stories we refuse to believe. It is all stories. Stories are important to everything.

The mythical story about Jeff Bezos’s mandate that all employees needed to use APIs internally is still 2-3% of my monthly traffic, down from 5-8% over the last couple of years, and it was written in 2012 (five years ago). I’ve seen this story on the home page of the GSA internal portal, and framed hanging on the wall in a bank in Amsterdam. Stories are important. Stories are still important when they aren’t true, or are only partially true, like the Amazon mythical tale is(n’t). Stories are how we make sense of all this abstract stuff, and turn it into relatable concepts that we can use within the constructs of our own worlds. Stories are how the cloud became a thing. Stories are why microservices and devops are becoming a thing. Stories are how GraphQL wants to be a thing.

For me, most importantly, telling stories is how I make sense of the world. If I can’t communicate something to you here on API Evangelist, it isn’t filed away in my mental filing cabinet. Telling stories is how I have made sense of the API space. If I can’t articulate a coherent story around an API related technology, it just doesn’t make sense to me, and it probably won’t stick around in my storytelling, research, and consulting strategy. Stories are everything to me. If they aren’t to you, it’s probably because you are more on the receiving end of stories, and not really influencing those around you in your community and workplace. Stories are important, whether you want to admit it or not.


Which Platforms Have Control Over The Conversation Around Their Bots

I spend a lot of time monitoring API platforms, thinking about different ways of identifying which ones are taking control of the conversation around how their platforms operate. One example of this out in the wild can be found with bots, by taking a quick look at which of the major bot platforms own the conversation around the automation going on via their platforms.

First, take a look at Twitter by doing a quick Google search for Twitter Bots.

Then take a look at Facebook by doing a quick Google search for Facebook Bots.

Finally, take a look at Slack by doing a quick Google search for Slack Bots.

It is pretty clear who owns the conversation when it comes to bots on their platform. While Twitter and Facebook both have information and guidance about building bots, they do not own the conversation the way Slack does, something that is reflected in the search engine placement. It also sets the tone of the conversation going on within each community, and defines the types of bots that will emerge on a platform.

As I’ve said before, if you have a web or mobile property online today, you need to be owning the conversation around your API, or someone eventually will own it for you. The same goes for automation around your platform, and the introduction of bots, automated users, traffic, advertising, and other aspects of doing business online today. Honestly, I wouldn’t want to be in the business of running a platform these days. It is why I work so hard to dominate and own my own presence, just so that I can beat back what is said about me, and own the conversation on at least Google, Twitter, LinkedIn, Facebook, and Github.

It seems to me that if you are going to enable automation on your platform via APIs, it should be something you own completely, making sure you provide some pretty strong guidance and direction.


The ElasticSearch Security APIs

I was looking at the set of security APIs over at Elasticsearch as I was diving into my API security research recently. I thought the areas where they provide security APIs for the search platform were worth noting and including in not just my API security research, but also my search and deployment research, with probable overlap into my authentication research. There is a quick sketch of putting a couple of these to work after the list below.

  • Authenticate API - The Authenticate API enables you to submit a request with a basic auth header to authenticate a user and retrieve information about the authenticated user.
  • Clear Cache API - The Clear Cache API evicts users from the user cache. You can completely clear the cache or evict specific users.
  • User Management APIs - The user API enables you to create, read, update, and delete users from the native realm. These users are commonly referred to as native users.
  • Role Management APIs - The Roles API enables you to add, remove, and retrieve roles in the native realm. To use this API, you must have at least the manage_security cluster privilege.
  • Role Mapping APIs - The Role Mapping API enables you to add, remove, and retrieve role-mappings. To use this API, you must have at least the manage_security cluster privilege.
  • Privilege APIs - The has_privileges API allows you to determine whether the logged in user has a specified list of privileges.
  • Token Management APIs - The token API enables you to create and invalidate bearer tokens for access without requiring basic authentication. The get token API takes the same parameters as a typical OAuth 2.0 token API except for the use of a JSON request body.
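
To ground these in something concrete, here is a minimal sketch in Python of putting the Authenticate and Token Management APIs to work, assuming a local cluster with security enabled and demo credentials. The exact paths vary by Elasticsearch version; older releases prefix them with /_xpack/security.

```python
import requests

ES = "http://localhost:9200"    # assumption: local cluster, security enabled
AUTH = ("elastic", "changeme")  # assumption: demo credentials

# Authenticate API: submit basic auth and get details about the user back.
me = requests.get(f"{ES}/_security/_authenticate", auth=AUTH)
me.raise_for_status()
print(me.json()["username"], me.json().get("roles"))

# Token Management API: trade credentials for a bearer token, then make
# subsequent calls without basic auth.
token = requests.post(
    f"{ES}/_security/oauth2/token",
    auth=AUTH,
    json={"grant_type": "password", "username": AUTH[0], "password": AUTH[1]},
).json()["access_token"]

resp = requests.get(
    f"{ES}/_security/_authenticate",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.json())
```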

Come to think of it, I’ll add this to my API management research as well, since much of it overlaps with what should be a common set of API management services. Like much of my research, there are many different dimensions to my API security research. I’m looking to see how API providers are securing their APIs, as well as how service providers are selling security services to API providers. I’m also keen on aggregating common API design patterns for security APIs, and quantifying how they overlap with other stops along the API lifecycle.

While the cache API is pretty closely aligned with delivering a search API, I think all of these APIs provide potential building blocks to think about when you are deploying any API, and represent the Venn diagram that is API authentication, management, and security. I’m going through the rest of the Elasticsearch platform looking for interesting approaches to ensuring their search solutions are secure. I don’t feel like there are any search-specific characteristics of API security that I will need to include in my final API security industry guide, but Elasticsearch’s approach has reinforced some of the existing security building blocks I already had on my list.


When You See API Rate Limiting As Security

I’m neck deep in my assessment of the world of API security this week, a process which always yields plenty of random thoughts that end up becoming stories here on the blog. One aspect of API security I keep coming across in this research is the concept of API rate limiting as security. This is something I’ve long attributed to API management service providers making their mark on the API landscape, but as I dig deeper I think there is more to this notion of what API security is (or isn’t). I think it has more to do with API providers than with the companies selling their warez to these providers.

The API management service providers have definitely set the tone for the API security conversation (good), by standing up a gateway and providing tools for limiting what access is available. I think many data, content, and algorithmic stewards are very narrowly focused on security being ONLY about limiting access to their valuable resources. Many folks I come across see their resources as valuable, and when they begin doing APIs they have a significant amount of concern about putting those resources on the Internet. Once things are secured and rate limited, all security concerns appear to have been dealt with. Competitors and others just can’t get at your valuable resources, they have to come through the gate–API security done.
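
To make concrete just how narrow that slice of security is, here is a hypothetical sketch of the check at the heart of most gateway rate limiting, a simple token bucket. Notice what it does not do: no payload inspection, no anomaly detection, no visibility below the gateway.

```python
import time

class TokenBucket:
    """Hypothetical per-key token bucket, the core of most gateway rate limits."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.state = {}  # api_key -> (tokens remaining, last refill time)

    def allow(self, api_key: str) -> bool:
        tokens, last = self.state.get(api_key, (self.capacity, time.monotonic()))
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[api_key] = (tokens - 1, now)
            return True   # request passes, and nothing about it was inspected
        self.state[api_key] = (tokens, now)
        return False      # respond with 429 Too Many Requests

bucket = TokenBucket(rate_per_sec=5, burst=10)
print(bucket.allow("partner-key"))  # True until the bucket drains
```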

Many API providers I encounter have unrealistic views of the value of their data, content, and algorithms, and when you match this with their unrealistic views about how much others want access to this valuable content, you end up with a vacuum that allows for some very narrow views of what API security is. Supporting this type of thinking, the awareness generated from API management is often focused on generating revenue, and not always on understanding API abuse, which can create blind spots at the database, server, and DNS logging levels and layers where security threats emerge. I’m assuming folks often feel comfortable that the API management layer is sufficiently securing things by rate limiting, and that we can see all traffic through the analytics dashboard. I feel this is one of the reasons folks aren’t looking up at the bigger API security picture.

From what I’m seeing, assumptions that the API management layer is securing things can leave blind spots in other areas like DNS, threat information gathering, aggregation, collaboration, and sharing. I’ve come across API providers who are focused on API management, but don’t have visibility at the database, server, container, and web server logging levels, and are only paying attention to what their API management dashboard provides access to. I feel like API management opened up a newfound awareness for API providers, something that has evolved and spread to API monitoring, API testing, and API performance. I feel like the next wave of awareness will be in the area of API security. I’m just trying to explore ways that I can help my readers and clients better understand how to expand their vision of API security beyond their current field of vision.


We Have A Hostile CEO Which Requires A Shift In Our API Strategy

As I work my way through almost one hundred federal government API developer portals, almost 500 APIs, and 133 Github accounts for federal agencies, the chilling effect of the change of leadership in this country becomes clear. You can tell the momentum built up across hundreds of federal agencies over the last five years is still moving, but the silence across blogs, Twitter accounts, change logs, and Github repos shows that the pace of acceleration is in jeopardy.

When you are browsing agency developer portals you come across phrases like this: “As part of the Open Government Initiative, the BusinessUSA codebase is available on the BusinessUSA GitHub Open Source Repository.” The link to the Open Government Initiative leads to a page on the White House website that has been removed–something you can easily find in the Obama archives. I am coming across numerous examples like this of how the change in leadership has created a vacuum when it comes to API and open data leadership, at a time when we should be doubling down on sharing data and content, and putting algorithms to work across the federal government.

After several days immersed in federal government developer areas, it is clear we have a hostile CEO, which will require a shift in our API strategy. After six months it is clear that the current leadership has no interest in transparency, observability, or even the efficiency in government achieved by focusing on opening up data via public, but secure, APIs. This doesn’t mean the end of our open data and API efforts, it just means we lose the top down leadership we’ve enjoyed for the last eight years when it came to technology in government, and efforts will have to shift to a more bottom up approach, with agencies and departments often setting their own agendas.

This is nothing new, and it won’t be the last time we face this working with APIs across the federal government. Even during times when we have the full support of leaders, we should always be on the lookout for threats, whether technical, business, or political. Across once active API efforts I’m regularly finding broken links to previous leadership documents and resources at the executive level. We need to make sure that we shift these resources to a more federated approach in the future, where we reference central resources, but keep a cached copy locally to allow for any future loss of leadership. This is one reason we should be emphasizing the usage of Github across agencies, which offloads the storage and maintenance of materials to each individual agency, group, or even the project level.
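
As a minimal sketch of that federated caching pattern, assuming a hypothetical central policy URL and a docs folder committed to an agency repo:

```python
import pathlib
import requests

# Hypothetical central resource we reference but do not control.
SOURCE = "https://example.gov/open-government/api-policy.html"
CACHE = pathlib.Path("docs/cache/api-policy.html")  # lives in the agency repo

try:
    resp = requests.get(SOURCE, timeout=10)
    resp.raise_for_status()
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(resp.text)  # refresh the local copy while upstream exists
    print("cached", SOURCE)
except requests.RequestException:
    # Upstream removed, like the Open Government Initiative pages.
    # The last cached copy committed to the repo keeps working.
    print("upstream unavailable, cached copy present:", CACHE.exists())
```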

It is easy to find yourself frustrated in the current environment being cultivated by the leadership at the moment. However, with the right planning and communication we should be able to work around it, and develop API implementations that are resilient to change, whether technical, budgetary, or on the leadership front, as we are dealing with now. Don’t give up hope. If you need someone to talk with about your project, please feel free to reach out publicly or privately. There are many folks still working hard on APIs inside and outside the federal government firewall, and they need our help. If you find yourself abandoning a project, please try to make sure as much of the work as possible is available on your agency’s Github repository, including code, definitions, and any documentation. This is the best way to ensure your work will continue to live on. Thank you for your service.


API Management Across All Government Agencies

This isn’t a new drumbeat for me, but it is one I wanted to pick up again as part of the federal government research and speaking I’m doing this month: the management of APIs across the federal government. In short, helping agencies successfully secure, meter, analyze, and develop awareness of who is using government API resources. API management is a commodity in the private technology sector, and something that has been gaining momentum in government circles, but we have a lot more work ahead to get things where we need them.

The folks over at 18F have done a great job of helping bake API management into government APIs using API Umbrella, with twelve federal agencies now on board.

This doesn’t just mean that each of these agencies is managing its APIs. It also means that all of them are managing their APIs in a consistent way, using a consistent tool, allowing these agencies to effectively manage access to government API resources.
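
As a quick sketch of what that consistency looks like from the consumer side, here is a hypothetical call through the shared api.data.gov layer, where one key and one header convention work across participating agencies. The endpoint below is made up; the X-Api-Key header and rate limit headers follow the api.data.gov conventions.

```python
import requests

API_KEY = "YOUR_API_DATA_GOV_KEY"  # one signup at api.data.gov, many agencies
URL = "https://api.data.gov/example-agency/v1/records"  # hypothetical endpoint

resp = requests.get(URL, headers={"X-Api-Key": API_KEY})
if resp.status_code == 429:
    # API Umbrella enforces consistent rate limits and reports them back.
    print("over the limit:", resp.headers.get("X-RateLimit-Limit"))
else:
    resp.raise_for_status()
    print(resp.json())
```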

I know that both 18F and USDS are working hard on this, but it is an area where we need agencies, as well as the private sector, to step up. We need every vendor doing API deployment projects for an agency to work to make sure that agency is using a standardized approach. This means vendors should make the investment in reaching out to the GSA and 18F, making sure they are up to speed on what is needed to leverage the work already in motion at api.data.gov.

Doing API management in a consistent way across ALL federal government APIs is super critical if all of this is going to scale the way we envision. The federal government possesses a wealth of valuable data and content that can benefit the private sector. This isn’t just about making the federal government more transparent and observable, it is also about making these valuable resources available in a usable, sustainable way to the private sector–industries will be better off for it. I’m happy to see the progress these twelve agencies have made when it comes to API management, but we need to get to work helping every other agency play catch up, making API management something that is baked into ALL API deployment projects by default.


Requiring ALL Platform Partners Use The API So There Is A Registered Application

I wrote a story about Twitter allowing users to check or uncheck a box regarding sharing data with select Twitter partners. While I am happy to see this move from Twitter, I feel the concept of information sharing simply being a checkbox is unacceptable. I wanted to make sure I praised Twitter in my last post, but I’d like to expand upon what I’d like to see from Twitter, as well as from ALL the other platforms I depend on in my personal and professional life.

There is no reason that EVERY platform we depend on couldn’t require ALL partners to use their API, resulting in every single application of our data being registered as an official OAuth application. The technology is out there, and there is no reason it can’t be the default mode of operations. There just hasn’t been the need amongst platform providers, nor any significant demand from platform users. Even if we don’t get full access to delete and adjust the details of each integration and partnership, I’d still like to see companies share as many details as they possibly can regarding any partner sharing relationships that involve my data.

OAuth is not the answer to all of the problems on this front, but it is the best solution we have right now, and we need to talk more about how we can make it more intuitive, informative, and usable by average end-users, as well as 3rd party developers and platform operators. API plus OAuth is the lowest cost, most widely adopted, standards based approach to establishing a pipeline within which ALL data, content, and algorithms operate. It gives a platform the access and control it desires, while opening up access to 3rd party integrators and application developers, and most importantly, it gives a voice to end-users–we just need to continue discussing how we can keep amplifying this voice.
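
None of this requires new technology. As a minimal sketch of the pattern, with hypothetical endpoints and credentials throughout, here is the standard OAuth 2.0 authorization code exchange, where a partner’s access is tied to a registered client_id that the platform, and the user, can see and revoke:

```python
import requests

# Hypothetical platform endpoints; the point is the pattern, not the API.
TOKEN_URL = "https://platform.example.com/oauth/token"
DATA_URL = "https://platform.example.com/api/v1/me/activity"

CLIENT_ID = "partner-app-123"         # issued when the partner registered an app
CLIENT_SECRET = "s3cr3t"              # issued alongside the client_id
AUTH_CODE = "code-from-user-consent"  # the user explicitly granted access

# Standard authorization code exchange: the partner trades the user's
# consent for a token bound to its registered application.
token = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": AUTH_CODE,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "redirect_uri": "https://partner.example.com/callback",
}).json()["access_token"]

# Every partner request is now attributable to a registered application,
# which is what makes it visible and revocable in a user's OAuth settings.
resp = requests.get(DATA_URL, headers={"Authorization": f"Bearer {token}"})
print(resp.json())
```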

To the folks who will DM, email, and Tweet at me after this story: I know it’s unrealistic and the platforms will never do business like this, but it is a future we could work towards. I want EVERY online service that I depend on to have an API. I want all of them to provide OAuth infrastructure to govern identity and access management for personally identifiable information. I want ALL platform partners to be required to use a platform’s API, and to register an application for any user on whose behalf they are accessing data. I want all internal platform projects to also be registered as applications in my OAuth management area. Crazy talk? Well, Google does it for (most of) their internal applications, why can’t others? Platform apps, partner apps, and 3rd party apps all side by side.

The fact that this post will be viewed as crazy talk by most who work in the technology space demonstrates the imbalance that exists. The technology exists for doing this. Doing this would improve privacy and security. The only reason we do not do it is because the platforms, their partners, and investors are too worried about being this observable across operations. There is no reason why APIs plus OAuth applications can’t be universal across ALL platforms online, with ALL partners being required to access personally identifiable information through an API, and end-users at least involved in the conversation, if not given full control over whether or not personally identifiable information is shared.


Making An Account Activity API The Default

I was reading an informative post about the Twitter Account Activity API, which seems like something that should be the default for ALL platforms. In today’s cyber-insecure environment, we should have the option to subscribe to a handful of events regarding our accounts, or be able to sign up for a service that subscribes for us and helps us make sense of our account activity.

An account activity API should be the default for ALL the platforms we depend on. There should be a wealth of certified aggregate activity services that can help us audit and understand what is going on with our platform account activity. We should be able to look at, understand, and react to the good and bad activity via our accounts. If there are applications doing things that don’t make sense, we should be able to suspend access, until more is understood.

The Twitter Account Activity API callback request contains three levels of detail:

  • direct_message_events: An array of Direct Message Event objects.
  • users: An object containing hydrated user objects keyed by user ID.
  • apps: An object containing hydrated application objects keyed by app ID.

The Twitter Account Activity API provides a nice blueprint other API providers can follow when thinking about their own solution. While the schema returned will vary between providers, the API definition and the webhook driven process could be standardized and shared across providers.
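
As a rough sketch of the receiving end, here is a minimal Flask webhook handler built around the three documented payload keys. The route and port are assumptions for illustration, and a production receiver would also need to handle Twitter’s challenge-response check, which is omitted here.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/account-activity", methods=["POST"])
def account_activity():
    event = request.get_json(force=True)

    # The three documented levels of the callback payload.
    for dm in event.get("direct_message_events", []):
        print("direct message event:", dm.get("type"))
    users = event.get("users", {})  # hydrated user objects keyed by user ID
    apps = event.get("apps", {})    # hydrated application objects keyed by app ID

    # The apps object is what makes account auditing possible: flag any
    # application we have never seen acting on this account before.
    for app_id, app_obj in apps.items():
        print("activity via app:", app_id, app_obj.get("name"))

    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```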

The Twitter Account Activity API is in beta, but I will keep an eye on it. Now that I have the concept in my head, I’ll also look for this type of API on other platforms. It is one of those ideas I think will be sticky, and if I can kick up enough dust, maybe other API providers will consider it. I would love to have this level of control over my accounts, and it is also good to see Twitter still rolling out new APIs like this.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, so I depend on my network to help me know what is going on.