Online Compiler

API Endpoint: https://api.jdoodle.com/v1/
API Description: The Online Compiler API enables program execution in 72 languages. It can be integrated into computer-education applications and online interview-assessment systems. The API features a REST architecture and JSON responses.
SSL Support: Yes
Twitter URL: https://twitter.com/thenutpan
Popularity: 0
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): Yes
Version: 1
Is the API Design/Description Non-Proprietary?: No
Other Request Format: On demand custom formats
Other Response Format: On demand custom formats
Version Status: Recommended (active, supported)

CarsXE

API Endpoint: https://api.carsxe.com
API Description: The CarsXE API gives external applications secure access to millions of vehicle records. Users can retrieve a vehicle's specifications, history, ownership cost, market value, title status, salvage records, and more for cars, motorcycles, trucks, and RVs, paying only for the calls they make. Use cases range from fleet analytics to energy, insurance, media, ride-hailing, and hospitality applications.
SSL Support: Yes
Popularity: 0
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Version: 0.0
Is the API Design/Description Non-Proprietary?: Yes
Version Status: Recommended (active, supported)

Spott

API Endpoint: https://spott.p.rapidapi.com/places
API Description: The Spott API provides tools for applications to search for cities, countries, and administrative divisions by name, autocomplete, or IP address. It can return the place where a given IP address is located, a single place identified by a Geoname ID, the place associated with the IP from which the request was made, and more. Spott builds developer tools that enable searching for places by full query and beyond.
SSL Support: Yes
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Version: 0.0
Is the API Design/Description Non-Proprietary?: Yes
Type of License if Non-Proprietary: Creative Commons attribution license
Version Status: Recommended (active, supported)
Direction Selection: Provider to Consumer

Apollorion Terraform Version

API Endpoint: https://terraform.apollorion.com/
API Description: The Apollorion Terraform Version API enables users to get a specific version of the Terraform.io infrastructure tool, all versions of Terraform, or the latest version. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. The API is organized by architecture in an easy-to-use REST format. A separate organization, Apollorion, provides this API.
SSL Support: Yes
Device Specific: No
Is This an Unofficial API?: Yes
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Version: 1
Description File URL (if public): https://terraform.apollorion.com/
Is the API Design/Description Non-Proprietary?: Yes
Type of License if Non-Proprietary: MIT
Version Status: Recommended (active, supported)
Direction Selection: Provider to Consumer

Facebook Enhances App Dashboard

Featured: Yes
Summary: 
Facebook has introduced a number of new features to its App Dashboard. The changes target improving the app review process and giving developers more insight and control over managing permissions. Facebook also gets more information regarding how developers use the Facebook platform.

Facebook has introduced a number of new features to its App Dashboard. Facebook has pitched these new features as giving developers more information regarding the permissions they use and don't use and speeding up the App Review process. However, the new features also give Facebook more access to apps and help the company understand how developers use the Facebook platform.

First, Facebook has streamlined the process for developers to request access to permissions during the App Review process. A new tool within the dashboard allows developers to review past API calls and their most frequently used endpoints.

Similarly, developers can now remove permissions directly from the dashboard. Historically, Facebook would message developers or send an email related to requested permissions. These messages would prompt the developer to take appropriate action. Now, developers don't have to wait for such messages and can proactively address permissions directly in the dashboard.

To show all changes, and help developers understand all that goes into the App Review process, Facebook has created a new App Review site. The site allows developers to easily see permissions requested and improves the submissions request process. Additionally, developers can learn more about the App Review process including when to submit, how to submit, the expected length of the process, common reasons for rejection, and more.


Apple App Store Connect

API Endpoint: https://api.appstoreconnect.apple.com/v1
API Description: The Apple App Store Connect API is a web service for automating tasks on the Apple Developer website and in App Store Connect. It allows you to build custom workflows into your application development life cycle that automate actions in App Store Connect. The API requires a JSON Web Token to authorize each request and returns JSON responses with links to additional related resources. App Store Connect API keys are unique to this service and cannot be used for other Apple services. Apple platforms offer unique capabilities and user experiences for hardware, software, and services that are designed to work together.
SSL Support: Yes
API Forum / Message Boards: https://forums.developer.apple.com/search.jspa?q=App+Store+Connect+API
Twitter URL: https://twitter.com/applesupport
Developer Support URL: https://support.apple.com
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Version: 1.0
Is the API Design/Description Non-Proprietary?: No
Version Status: Recommended (active, supported)
Direction Selection: Provider to Consumer

ElenaSport

API Description: ElenaSport.io is a fast, reliable, and affordable sports data provider for developers, analysts, football fans, and anyone who needs live and historical football data enriched with details, statistics, logos, and pictures. Find out more: https://elenasport.io/#header-section
How is this API different?: High-quality service for a few dollars per month
SSL Support: Yes
API Forum / Message Boards: https://rapidapi.com/mararrdeveloper/api/elenasport-io1/discussions
Interactive Console URL: https://rapidapi.com/mararrdeveloper/api/elenasport-io1/endpoints
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Description File URL (if public): https://s3-eu-west-1.amazonaws.com/www.elenasport.io/apispec_yaml.yaml
Is the API Design/Description Non-Proprietary?: Yes
Version Status: Recommended (active, supported)
Direction Selection: Provider to Consumer

From the DC-Area API Meetup: How an Alternative Data API Can Be Used To Improve Predictive Analysis

Includes Video Embed: Yes
Summary: 
From the DC Area API Meetup, Accrue Ltd. CEO Benoit Brookens III discusses the concept of alternative data and how it can be used to improve predictive analysis. In his presentation, Brookens looks back at typhoons rated as 8 or higher and correlates their timing to movement in financial markets.

As a part of ProgrammableWeb's ongoing series of on-demand re-broadcasts of presentations that were given at the monthly Washington, DC-Area API meetup (anyone can attend), this article offers a recording and full transcript of the discussion given by Accrue Ltd. founder and CEO Benoît Brookens who is based in Hong Kong. Originally, Brookens was a securities trader who started to wonder whether seemingly unrelated events could be correlated to the change in stock market prices. He then began to plug the details of those events into a calendar in a way that he could look at the sudden rise of a stock and correlate that rise to the other events that happened on the same day (or the days just preceding).

The result of that exploration is his company Accrue and the API it offers to anyone wanting to do the same types of correlations; for example investors or analysts.

The DC-Area API Meetup almost always takes place on the first Tuesday of every month. The attendees consist of API enthusiasts and practitioners from all around the federal government as well as businesses and organizations that are local to the DC Metro area. There is no charge to attend and attendees get free pizza and beer, compliments of the sponsors. The meetup is always looking for great speakers and sustaining sponsors. If you're interested in either opportunity, please contact David Berlind at David.Berlind@programmableweb.com. If you're interested in attending, just visit the meetup page and RSVP to one of the upcoming meetups. It's that simple.

Here's the video of Brookens' talk and the full transcript:

Developers Rock Podcast (special edition): How an Alternative Data API Can Be Used To Improve Predictive Analysis

Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Full Transcript of: How Alternative Data Can Be Used To Improve Predictive Analysis

The following transcript is from Benoît Brookens III's presentation, transcribed as faithfully as possible from the video above. As with many transcriptions of this nature, some sentences may run on or appear fractured. Our goal is for the transcript to be as true to the presentation as possible.

Benoît Brookens III: Just a little quick intro. My name is Benoît Brookens. I'm the founder of a big analytics company based mostly in Hong Kong. I'm here visiting; I'm a Washingtonian natively.

We are essentially an alternative data company. Alternative data is essentially this intersection of liberal arts and technology, like Steve Jobs talked about. It's basically the acceptance that every industry could benefit from some other contexts that their industry might not be currently appreciating. For instance, it's like a farmer using social media data to understand the impact of avocado prices, or looking at shipping information in order to anticipate demand or supplies of competing products in the market from a foreign place. I know that's a little high level, but in a nutshell, we're providing event-based intelligence for decision makers to make better decisions.

In 2017, we were selected as the most promising FinTech company in the world by the London Stock Exchange and the UK government. Here's us opening the UK Stock Exchange and I'm in the middle.

Anyway, the focus of this topic is demonstrating one use case that we focus [on in] what we're building. I'm not going to spoil it by telling you actually what we're building yet, but financial investors have a problem because they are now inundated by alternative events that previously never drove impacts to the level that they do now. For example, a Trump tweet is sending markets up and down, disrupting Boeing, disrupting all types of things, trade war developments happening. As they happen, they force people to put on many different new hats to assess what the impact is on their businesses, and maybe potentially their investments.

In this case, talking about APIs, I'm looking at the financial market as a simple time series dataset. It's something measured over time, like end of day sales, innovate prices of any variety of volumes. Essentially one use case is basically in Hong Kong. Last September I encountered my very first typhoon. It was a typhoon 10, in fact. This is a category five, like the one that had recently devastated The Bahamas. Buildings are swaying three feet back and forth, and I was in a panic wondering, "what should I do? Should I have hopped on a flight and gone to Thailand or somewhere where it was calmer and nicer in order to spend my day?" But I hung back. It was my first typhoon, and I rushed to the grocery store like everyone else. Rushing to the grocery store, the grocery store shelves were empty. I thought that was interesting because it happens every time.

What I did was sit at home and, having technology like the way I have, I was wondering how I can turn the observations that I was making into some insight that might or might not be reflected as a hypothesis in the financial markets? Essentially what we did, is I back tested it. I took every single typhoon 8, or more, over the past five years and I started correlating it to particular stocks. I was looking for good risk adjusted returns, meaning that these stocks typically exhibited good risk reward ratios in the financial markets.

What I uncovered was two big brands. One was a non-alcoholic beverage company and two was a brewery company. These are the second biggest in their class, meaning this is the second biggest, non-alcoholic beverage brand. They sell juice, tea, water, coffee, et cetera, and this is the second biggest beer company in China. I'm in Hong Kong by the way, so 100% of the time over the past five years, these two stocks are reacting. I found that really interesting.

This is not investment advice, I'm not telling you to go buy any stocks. This is not an investment show, but essentially what I started doing in my thinking process was that I was for the first time taking something called unstructured data. Unstructured data is growing at about 12.8 terabytes every minute and these things are coming out of a variety of things, from cameras to sensors to government websites, and they're all actually not in a condition or format to do any type of analysis. Meaning, if you wanted to understand the impact of... Say you own a sandwich shop and you also have a gelato in the back, and you want to know if you sell more sandwiches or gelatos on a rainy day.

How many businesses could actually do that? Not many. In fact there are POS's that have the data for their sales, but there's no API that they can plug into and say, "Hey, it's a rainy day in D.C., what should I do? Should I go make more sandwiches or should I just keep the gelato cold?"

We are building one of the first APIs in the world that is commercially available for the private sector to begin to do these things far more casually. This is unstructured data, and it's growing at a rapid pace. What we're doing essentially is turning the world into many different types of calendars. These are religious calendars, blockchain industry calendars, seasonal calendars, sports calendars, weather calendars, natural disasters, politics, products, et cetera. A product calendar would be something like an iPhone release; at a corporate calendar level, it would be a CEO speech, a WWDC conference, et cetera.

You can analyze the impact of this on other things that you might take for granted. It's not just about Apple stock; it can be about transportation, it can be about a smart city demanding, "how much of a traffic jam do we actually have?" If you want to see the model, that concept, before you get to a quantitative metric, you have to start with something, start with an event, start with something that you can test, a hypothesis. We're building a way for someone to take that unstructured data and turn it into something simple that's never been innovated on, which is a calendar. We all have a calendar on our phone. It's pretty much the most neglected app on our phones. It hasn't really been innovated on since the iPhone one. It's schedules, meetings, et cetera, but it's very, very powerful.

We are taking this attributed chronology, meaning when we structure the data from the internet, say religious calendars, we tell you where it comes from. We got it from this website or we got it from this place, or we got this article from this governmental source. We're basically classifying this as a knowledge graph. You can say this is an iPhone product release. We can use a graph database to link that to its competitors. iPhone is a product line competitor of the Galaxy, and that's how it's related to Samsung. This iPhone is also a handheld device, so it's related to other handheld devices, but it's mobile, so it's these kinds of things. We're classifying all of these objects and events and activities into a variety of different things.

Blockchain is not a buzzword here. We are using primarily immutable databases. An immutable database is simply a mechanism for recording something as a version of itself over time. We don't delete anything actually, we don't delete any data. When you have different types of editing that happens at Wikipedia, things change and you might have a deletion, you don't want to start from deletionism. If there's an update of an economic record, we will say "revised" rather than "deleted" and "replaced."

Anyway, we are building an integrated global calendar in simple sense of sports, industry, beliefs, economics, weather, and there are hundreds of thousands of calendars that we build.

To give you an example of why this is important, again, these are all financial examples, not an investment advice, we were looking at Apple stock. Say you wanted to go back to 2010. 2010? What was going on at 2010? Cities don't have memories and neither do markets nor people really that good anymore. You want to highlight this little area. What was going on in the spike? You can't go to Google and say what was going on here. You really can't, you can't just type that. There's no database search for that, there's no research engine. People pay a lot of money to find that out.

We built the database; I was pointing at time. You could go 8/31 through 9/8. What was taking place between those ranges? You can see the Samsung Epic 4G, which might've been an epic failure because the Epic doesn't exist. Then you had iOS 4.1 announced on 9/1 and iOS released on 9/8. These two things were really interesting. Funny enough, there was a really interesting correlation between iOS announcements and moon phases during Steve Jobs' lifetime that many people did not actually extract. But we were able to find the serendipitous fact that uncovered a really interesting correlation that might go beyond spuriousness into something really interesting.

Again, we're building this as an API. This doesn't have to be stock data, this could be virtually anything. You have a sushi shop and there was a sushi expo in town, and maybe your sushi sales go down. It can be anything really.

Keep going further, we compete with some existing players like Kensho, Bloomberg, Thomson Reuters, the financial sector. We cover others, but what makes us special is that you can bring your own data. We're not locking you into financial data sets or telling you that you can't add in Willy Wonka chocolate factory's event or things that are taking place in your small town. This is a very open-ended API database to allow people to do that base intelligence. I don't want to bore you with this, but this is just a sample of a dashboard. To do this would require sourcing, structuring, cleaning, executing, all this information, but a decision maker can come here and kind of get these insights. This is just one example of what we've done with this. It's a what-you-see-is-what-you-get, an if-then drag and drop algorithm builder.

If there was a tropical cyclone above 8 or more, this actually references an API dataset, that historical dataset. You could say buy 100 shares of the market. This could be used in someone's home, or it can be used in a rather large variety of places, if there's a weather event, if it's above 70 degrees, turn on the air conditioners in the home. We're experimenting with different ways to play with this type of information, being API first. We're having a really creative exercise and open to feedback and ideas as to how people will think about this.

This is just another financial use case. This product was available, not available now, where you kind of took it and put it back on our shelves and allows someone to casually do data sampling. If I were to look at this type of logic between these periods of time, what could I infer? What historical data will be presented? You could run thousands of these algorithms at the same time, technically. Again, we're taking this idea of a calendar and we're taking it to automation, taking it to big data analytics. We're not focusing on just black box AI or anything. It's really about transparent explainability of things that I believe, things that I see, things that I feel, and being curious about them and thinking if they have any value.

I can go to a real demo. We don't really focus on government, but we do from the perspective of focusing on smart cities. We have a pure smart city focus, cities that want to basically uncover how does traffic, how does weather, how is it impacting in their city and how are events that they may be aware or not aware of impacted us. Part of my team, I have a background as a trader. Some of our team comes from names like SAP, et cetera, et cetera. I'll give you a demo, quickly, of how it works. Any questions so far?

Speaker 2: Would you show us the API?

Benoît: Huh?

Speaker 2: Would you show us the API?

Benoît: Yeah, I can show you the API. Let's see. Let's see. How do I do it? I hope I'm not talking too fast. I'll show you the GUI version of the API. I wish I could see this. Can I slide this?

Essentially, you have a calendar here. Again, this is just a sample. This is just a small data sample, but I'll scroll to the bottom to keep it simple. You have a variety of things taking place. On what day was that? On the 29th of September, I was kind of scrolling through and you can see things like the Russian Grand Prix, the Berlin Marathon, a variety of things. Let's just click on the Berlin Marathon. Under Berlin Marathon, you can basically see that it's one of one listed here. That was scraped. Let's see if I can find something with a lot more history.

This is a demo, I didn't test these examples before I made them. This is the FIA Formula Grand 3 (End). Oh wow, only one of those two, that's how funny. I'll go back in time and find something interesting. I'll go back to October 31st, 2017. I'm scrolling really quickly, but you have a variety of types of activities happening. You have the unemployment rate being reported in Japan, it's Halloween, obviously. These are candlestick patterns and SEC filings and we clustered them together. For this example, you get a quarterly profit increase of this particular stock ticker, you had things like crypto products releasing certain versions, you had a car ramming accident in New York City, unfortunately. Let's see, I'll keep scrolling... Astrological events for max.

I'll show you what you can do with this and how you could use the API. Let me find one good one, for instance. Let's just say the SEMA show, it's an auto show based in Las Vegas and it runs every couple of years. This is just three examples, you can see that this was scraped from a summer.org and so what you can do in this, this is user API, you can add in, essentially, a date.

Manually, if you're doing your own research, you want to add in earnings releases a calendar, et cetera. I can put it in today's date. I can just do test and then I can source, I'm just going to do dcapi.org. This is not signing into the blockchain in this example, but what you're doing is you use API to store simply a time series data set of date time. You're able to see it in full chronology. This is all the start days of the auto show. Let's run this and we can use the API to ask a question to it. I'm going to put in SEMA auto show and I'm going to look at Ford Motors, trading in New York, but they also trade in London, et cetera, but we're going to use it here. We're not going to be too fancy, and we're just going to do an analytic, where we're doing an analytic, basically purchasing at the closing price on the first date of the event and we're going to sell it at the closing price following the event.

What we're doing is we're exploring this data exhaustively, although there's only three examples of this. You can basically see that we've turned this into plain text. This is a five day trade for the auto show, considering the last occurrences and latest being in 2018, entering zero days before the event. This pattern has an average gain loss, et cetera. What it's doing is allowing you to explore your hypothesis about how some event may or may not correlate based on historical activities.

You can see 2017 it dropped, 2018 it rallied, and you can see what happens in between. Starting one day, going into the future, we currently have the setting on five days, but I can use a slider to explore that. If it's 30 days, here's how it works. One example is you can search, so the database is pretty cool.

I'll show you one thing that people take for granted. One other tool we built, it's called the Almanac. It basically is a screener, so you can essentially search all the different types of events from Saudi oil discoveries to Kentucky Derby, et cetera, and you can do a massive scan using our API. Let's look up July 4th and let's just purchase at the closing price of the first day following July 4th, and we're going to hold for arbitrarily two days just to give an example of the API. Again, this can be any time series data set in a business, and we're just simply going to run that across all U.S. Stocks, so S&P 100, this is my last example, no Hong Kong, no Crypto, no Forex.

What you can do in one second is basically take any real world concept and event and you can basically screen across all of the markets in seconds and get a result of all the securities or variables in your business, or factors or employees, or whatever these time series might be. You get sort by tops and tails, you can see Walmart, it's in there for July 4th, you can see Nike is in there for July 4th, and you can explore these patterns.

Anyway, the idea is that you can turn a calendar into an API you can finally use. We're presenting this as a concept for other types of businesses that might be relevant to government for market surveillance or for other types of reasons. Come and talk to me if you have any questions or ideas. Thank you.


Simple JSON Blob Storage

API Endpoint: https://jsonapi.gunterweb.ca/api
API Description: Secure JSON blob storage API.
How is this API different?: According to the provider, it is the only JSON blob storage service whose API includes security that protects blobs from being read by public users.
SSL Support: Yes
Device Specific: No
Is This an Unofficial API?: No
Is This a Hypermedia API?: No
Restricted Access (Requires Provider Approval): No
Is the API Design/Description Non-Proprietary?: Yes
Version Status: Pre-release
Direction Selection: Provider to Consumer

February's DC-Area API Meetup to Feature Talks on Artificial Intelligence and APIs 101 (Part 2)

Featured: Yes
Summary: 
It's almost February and do you know what that means? It's time for another monthly API meetup in Washington, DC! The February edition of the meetup is Tuesday, Feb 4, 2020 and will feature two talks: one on succeeding with APIs for artificial intelligence, and the other an APIs 101 talk.

It's that time of the month again... time for another API Meetup in Washington, DC. Since January's meetup was canceled due to snow (basically, all of Washington, DC shut down for the occasion), we will be running with the same agenda (same speakers, same topics, same pizza, same beer and other beverages) for the February edition on Tuesday, Feb 4, 2020. So, with apologies, what follows below is a massively plagiarized version of the summary that I wrote for last month's intended meetup. But if you're looking for coverage and videos from the previous DC meetups, then we've got you covered.

I will be hosting this meetup at the offices of the digital agency U.Group.  

So far, in addition to the free pizza and drinks that will be provided to you compliments of sponsorships by the DC-based technology consultancies 540.co and U.Group (respectively), we have two great presentations lined up. One of these is being delivered by CapitalOne's Director of Platform Services Matthew Reinbold who is giving a talk on how success with artificial intelligence depends on APIs. 

According to Reinbold, AI has tremendous business potential to turn big data into sustained competitive advantage and thusly, harnessing AI insight is high on many companies' 2020 strategic plans. However, developing the technical foundation necessary to support a successful, secure, and *ethical* program requires API expertise. In his presentation, Reinbold will share his experience on the challenges that go with the newer AI approaches and how those challenges can be overcome with a sound API approach.

For the second presentation, I will be delivering Part 2 of my ongoing API 101 “college course.” Don't worry! If you missed Part 1, there's still time to catch up via the delayed broadcast! With few exceptions, these parts will be ongoing each month nearly indefinitely. In this part, I will pick up where my January presentation left off and describe the many advantages that accrue to organizations as a result of what's known as the API's technical contract (described and defined in Part 1). This part will get into the business advantages of modernizing an organization's legacy IT, moving more towards a microservices-driven, API-led infrastructure, and how the technical contract behind an API fuels numerous efficiencies that can, directly and indirectly, impact the bottom line in a positive way.

And then, who knows? We’ve been known to add presentations at the last minute! 

If you live in the Washington, DC area (or will be in the neighborhood on Feb 4, 2020) and want to rub shoulders with other members of the local API community, then this is the meetup to come to. So, I hope to see you there!

Finally, as always, we are very grateful to the meetup's enduring sponsors for making our monthly gatherings possible: U.Group for providing the beverages and the venue, and 540.co, GitHub, and MuleSoft, who take turns buying the pizza. If you are interested in becoming a sponsor of the meetup, feel free to reach out to me at david.berlind@programmableweb.com.


How to Craft a Command Line Experience that Developers Love

Contributed Content: Yes
Summary: 
If you're trying to build a highly usable developer tool, then a proper Command Line Interface (CLI) to interface with your API is paramount. This article outlines what we found to be best practices among other CLI tools and developers' needs when it comes to building a proper CLI.

If you're setting out to build a highly usable developer tool, it goes without saying that a proper Command Line Interface (CLI) to interface with your API is paramount. As Zeit and Heroku have been setting the tone for these types of developer tools by doing extensive research into best practices when it comes to a command line "experience", we started our quest by digging into their findings.

Since the Stream CLI is currently in public beta, the methods and philosophies we found from our research, as well as those we unearthed ourselves, are fresh in our minds, and we wanted to take a few minutes to outline what we found to be best practices among other CLI tools, along with developers' needs, when it comes to building a proper CLI.

Below is a step by step explanation of how we would go about building another CLI and some explanations about why we chose to do things the way we did.

Options

A good number of open-source projects have arisen to help facilitate the scaffolding and overall development of a CLI.

Aside from our backend infrastructure here at Stream, which is written primarily in Go, we use JavaScript for many of our tools – its flexibility between frontend and backend projects, the large number of open-source contributions to it, its overall global presence, and its ease of use (for some of the aforementioned reasons) all make it an obvious choice for creating a powerful tool with a low barrier to entry.

Likewise, if you're setting out on an adventure to build a CLI, there are dozens of open-source projects built with JavaScript to help you get started. To be fair, when we started looking into building a CLI, Commander and Caporal were hitting the top of Google and npm on nearly every search, but we wanted something more robust – a battle-tested project that provided everything we needed in one go, rather than a package that simply parsed arguments and passed them along with a command.

That's when we found Oclif.

Oclif

Oclif is a JavaScript-based CLI framework that was open-sourced by the team behind Heroku. It comes packed with pre-built functionality and even offers extendability through the use of plugins.

At a glance, there were a few major features that stuck out when we were looking into Oclif:

  • Multi-command support
  • Auto-parsing of command arguments and/or flags
  • Configuration support
  • Auto documenting codebase

Ultimately, the availability of these features was the primary reason why we chose to move forward with using Oclif as the base for our CLI tool here at Stream.

Remember, these are just some of the built-in features that Oclif ships with out of the box. For a comprehensive list of options, we recommend taking a look at the official Oclif docs here.

Multi-Command Support vs. Single-Command Support

It's important to note that, if you have a single endpoint or method you're calling, single-command (e.g. grep) support is all that you'll need. If you're developing a larger CLI tool, such as the one we created for Stream, you'll likely need to opt-in for multi-command support (e.g. npm or git). Here's a quick breakdown of the difference:

Single:

$ stream --api_key=foo --api_secret=bar --name=baz --email=qux

Multi:

$ stream config:set --api_key=foo --api_secret=bar --name=baz --email=qux

While they may look similar, there is one key difference between the two options: single command does not allow for subcommands or "scoping" as we like to call it. This simply means that complicated or nested commands are not made possible with single command support.

Both types of commands take arguments, regardless of the configuration. Without arguments, it wouldn't be a CLI. One advantage to multi-command support is that it delimits the calls with a colon (e.g. ":"), allowing you to keep things organized. Better yet, you can organize your directory structure using nested directories as shown in the src code on GitHub.

It can be a bit difficult to conceptualize in the beginning; however, once you get your hands dirty creating a CLI for the first time, it'll all come together and make sense.
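
To make the distinction concrete, here is a minimal sketch of what a multi-command like config:set can look like as an Oclif command class. This assumes the v1-era @oclif/command package, and the flag names are illustrative rather than the actual Stream CLI source:

// src/commands/config/set.js — the nested directory maps to the "config:set" command
const { Command, flags } = require('@oclif/command');

class ConfigSet extends Command {
  async run() {
    // Oclif parses argv into typed flags for us
    const { flags: parsed } = this.parse(ConfigSet);
    this.log(`Saving credentials for ${parsed.name} <${parsed.email}>`);
  }
}

ConfigSet.description = 'Set your API credentials';

ConfigSet.flags = {
  api_key: flags.string({ description: 'API key' }),
  api_secret: flags.string({ description: 'API secret' }),
  name: flags.string({ description: 'Full name' }),
  email: flags.string({ description: 'Email address' }),
};

module.exports = ConfigSet;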

Auto Parsing

Under the hood, Oclif handles parsing the command line arguments that are passed in. Generally, with Node.js, you'd have to pull arguments out of the array provided by process.argv. Although this isn't particularly difficult, it's definitely error-prone… especially when you toss in requirements for validation or casting to strings and booleans.

If you're not planning on using Oclif to handle the parsing for you and just need to move forward with a simple setup, we would recommend minimist, a package dedicated to argument parsing in the command line.
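
To sketch what that looks like (the flag names here are invented for illustration), minimist reduces parsing to a single call:

// parse.js — minimal argument parsing with minimist
const minimist = require('minimist');

// e.g. `node parse.js --api_key=foo --verbose extra`
const argv = minimist(process.argv.slice(2), {
  string: ['api_key'],   // always treat api_key as a string
  boolean: ['verbose'],  // treat --verbose as a true/false flag
  default: { verbose: false },
});

console.log(argv.api_key, argv.verbose, argv._); // argv._ holds positional arguments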

Configuration Support

With any server-side integration (whether it's an API or SDK), you'll (hopefully) likely have to provide a token of some sort (for security and identity reasons).

For our integration, we needed to persist the configuration credentials for a user (e.g. API key & secret, name, and email) in a secure location on the user's computer. Without persisting this type of data, we would have to make sure that every API call to Stream included the proper credentials and, let's face it, nobody wants to pass arguments with every command.

To get around this issue, we leverage Oclif's built-in support for managing configuration files by storing user credentials in a config.js file within the config directory on the user's machine. Typically the config directory resides in ~/.config/stream-cli on Unix machines or %LOCALAPPDATA%\stream-cli on Windows machines. With the help of Oclif, we don't have to worry about detecting the user's machine type, as Oclif takes care of this distinction under the hood and exposes the directory inside your command class as this.config.configDir.

Knowing this, we were able to create a small utility to collect and store the necessary credentials using the fs-extra package. Have a look at the code here.

Docs for configuration options within Oclif can be found here.
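
As an illustration only (the real Stream CLI utility linked above differs in its details, and the config.json filename here is an assumption), persisting credentials under Oclif's config directory can be as simple as a pair of fs-extra helpers:

// config-util.js — hypothetical helpers for persisting CLI credentials
const fs = require('fs-extra');
const path = require('path');

// `configDir` comes from `this.config.configDir` inside an Oclif command
async function saveConfig(configDir, credentials) {
  await fs.ensureDir(configDir); // create the directory if it doesn't exist yet
  await fs.writeJson(path.join(configDir, 'config.json'), credentials, { spaces: 2 });
}

async function loadConfig(configDir) {
  return fs.readJson(path.join(configDir, 'config.json'));
}

module.exports = { saveConfig, loadConfig };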

Auto Documenting Codebase

We were very happy (and surprised) to find that Oclif supports auto-documenting commands. Without this sort of functionality, we would have to manually change our README and underlying docs every time we made a change such as adding/removing a command argument, changing a command name or modifying the directory structure within our commands subdirectory. You can probably imagine how difficult this would be to maintain within a large CLI project like the Stream CLI.

With the help of the @oclif/dev-cli package, we were able to add a single script to our package.json file that is run during the build process. The command scans the directory structure and magically generates docs, as shown here.

Interactive & Raw Argument Support

Sometimes, when calling a command via a CLI tool, one of the last things you want taking up space in your brain is the full list of required arguments for that command, especially if there are a lot of them. While you can always use the --help flag to print out the required arguments, sometimes it's best to provide an interactive prompt that asks the user for any information missing from the provided flags.

For example, rather than calling:

$ stream config:set --api_key=foo --api_secret=bar --name=baz --email=qux

The user can call (with zero arguments passed):

$ stream config:set

And they will be prompted interactively for each missing value.

There are several options for prompting users and we've found Enquirer to be the easiest package to work with. Although this package is similar in functionality to Inquirer, the Enquirer API tends to be a bit more forgiving and easier to work with.

It's important to apply this prompt-style functionality to all of your multi-argument commands, if possible. However, make sure to check the flags to ensure that you're not prompting the user for information they've already passed. For example:

https://gist.github.com/nparsons08/06236f806c61c6233f52dc0f67134693

Note how we check the flags and display the prompt ONLY if the flags do not exist.
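
The gist above is Stream-specific, but the general shape is roughly this (a sketch assuming Enquirer's prompt API, with illustrative flag names):

// prompt-missing.js — ask only for the values absent from the parsed flags
const { prompt } = require('enquirer');

async function getCredentials(flags) {
  const questions = [];

  if (!flags.api_key) {
    questions.push({ type: 'input', name: 'api_key', message: 'What is your API key?' });
  }
  if (!flags.email) {
    questions.push({ type: 'input', name: 'email', message: 'What is your email address?' });
  }

  // If every flag was supplied, skip the prompt entirely so scripts are never blocked
  const answers = questions.length > 0 ? await prompt(questions) : {};
  return { ...flags, ...answers };
}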

Make it Pretty

Command lines are generally thought of as bland green and white text on a black background. News flash: there's not actually anything stopping you from making your CLI stand out. In fact, developers love when colors are introduced to the command line – colors help differentiate errors vs. success messages, events/timestamps and more.

If you want to make things pretty, Chalk is a great (if not the best) package to use. It provides an extensive API for adding colors to your CLI with little to no overhead. To integrate Chalk into your CLI:

import chalk from 'chalk';

Then, wrap your string with the chalk method, color, and optional styling (bold, italics, etc.) to add some flair to your output:

this.log(`This is a response and it's ${chalk.blue.bold.italic('bold, blue, and italicized')}`);

Use Tables for Large Responses

Let's face it, no developer wants to comb through a large response returned by your API. With that being the case, it is important to always return something meaningful and easy to read. One of our favorite ways to provide the user with an easily digestible output is to use a table.

For table output, we chose the cli-table package, as it provides an easy-to-use and flexible API that supports the following:

  • Vertical and horizontal displays
  • Text/background color support
  • Text alignment (left, center, right) with padding
  • Custom column width support
  • Auto truncation based on predefined width
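
A minimal cli-table usage looks something like this (the columns and rows are invented for illustration):

// table.js — rendering rows with cli-table
const Table = require('cli-table');

const table = new Table({
  head: ['Channel', 'Members', 'Created'], // header row
  colWidths: [24, 10, 14],                 // fixed widths; overflow is truncated
});

table.push(
  ['general', '42', '2019-11-02'],
  ['engineering', '17', '2019-12-10'],
);

console.log(table.toString());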

Printing JSON for Parsing with Bash & JQ

The beauty of providing a CLI is that it can be called either by a user or by a script. Part of creating a highly approachable and usable tool is defaulting to communication that immediately makes sense to the user. With that said, scripting allows for a hands-off approach, which is especially helpful when the user would like to run a set of commands rather than firing off one-off commands.

While the Stream CLI defaults to returning user-friendly (and human-readable) outputs (see Make It Pretty and Use Tables for Large Responses), we understand that, when running a script, you will likely want a verbose response instead of a human-readable message. To access the raw response data, we added a --json flag that allows the user to specify the raw payload as JSON for the response output.

Below is a quick example showing how to fetch credentials for a user from the Stream CLI, piping the output directly to JQ, a lightweight and flexible command-line JSON processor:

https://gist.github.com/nparsons08/693047bfae6fc14c757222e7ff933aa4

We found that providing this functionality is especially useful for Stream Chat, should the user want to set up their chat infrastructure, provision users and permissions, etc. in one go without using the underlying REST API.
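
As a hypothetical example (the subcommand name here is illustrative, not a confirmed Stream CLI command), a script could extract a single field from the JSON output like so:

$ stream config:get --json | jq '.email'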

Publishing

Publishing a CLI may seem daunting; however, it's no different than publishing any other package on npm. The basic steps are as follows:

  1. Update the oclif.manifest.json file using tools provided by the @oclif/dev-cli package. The tooling scans the directory structure and updates the manifest with the new version of the CLI, along with all of the commands that are available to the user. The manifest can be regenerated by calling rm -f oclif.manifest.json && oclif-dev manifest from your command line.
  2. Update the docs to reflect any changes made to the commands. This is also handled by the @oclif/dev-cli package and can be run using oclif-dev readme --multi (or --single if you're running a single-command CLI).
  3. Bump the npm version using the version command (e.g. npm version prerelease). The full docs on the npm version command can be found here.
  4. Publish the release to npm with the npm publish command (e.g. npm publish).
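
Assuming the steps above, a release from a clean working tree boils down to a short command sequence:

$ rm -f oclif.manifest.json && oclif-dev manifest
$ oclif-dev readme --multi
$ npm version prerelease
$ npm publish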

A user can then install the CLI globally with npm or yarn:

npm -g install <YOUR_CLI_PACKAGE>

OR

yarn global add <YOUR_CLI_PACKAGE>

If you need to distribute your CLI as a tarball, we recommend looking at the oclif-dev pack command provided by the @oclif/dev-cli package – this command will allow you to deploy packages to Homebrew and other OS-specific package managers, or simply run them independently on the system.

Key Takeaways

If you'd like to dig into the full source code behind the Stream CLI, you can find the open-source GitHub repo here. While the key takeaways in this post are not an exhaustive list of our suggestions for best practices, we do hope that you walk away from this post with some additional knowledge to apply to your CLI. To summarize our main takeaways from this endeavor:

  • For inspiration, look at the functionality that Zeit and Heroku provide within their CLI to make for an awesome developer command line "experience".
  • If your API/CLI requires data persistence, store that data in a cache directory that is specific to your CLI. Load this using a util file as we do at Stream. Also, note that the fs-extra package will come in handy for this type of thing (even though support is built into Oclif).
  • Oclif is the way to go, especially if you're building a large CLI, as opposed to a single-command CLI. If you're building a single-command CLI you can still use Oclif, just make sure to specify that it's a single-command API when you're scaffolding your CLI.
  • Don't want to use a framework? That's okay! The package minimist provides argument parsing in the command line and can easily be used within your project.
  • Use prompts, when you can, with Enquirer or another package of your choosing. Users should be walked through the requirements of the command and asked for the data the command needs in order to execute properly. Note that this should never be required (e.g. let the user bypass the prompt if they pass the correct arguments).
  • Use colors when possible to make your CLI a little easier on the eye. Chalk serves as a great tool for this.
  • If you have response data that is well enough structured, don't just print it out to the user (unless that's what they specify). Instead, drop it in a table using cli-table.
  • Always allow the user to specify the output type (e.g. JSON), but default to a message that is human-readable.
  • Keep it fast! For time-consuming tasks such as file uploads or commands that require multiple API calls, we recommend showing a loading indicator to let the user know that work is being done in the background. If you're looking for a package on npm, we recommend checking out ora.

As always, we'd love to hear your thoughts and opinions, as well, so please feel free to drop them in the comments below!

If you're interested in building a chat product on top of the Stream platform, we recommend running through our interactive tutorial. For the full docs on the Stream Chat API, you can see them here.


Twilio Study Reveals Erosion of Consumer Trust in Telephone Infrastructure Due to Robocalls

Summary: 
Twilio has released its annual State of Customer Engagement Report. To find out more about the report and the trends it spotlights, ProgrammableWeb's editor-in-chief David Berlind captured a video interview with Al Cook, the company's Vice President and General Manager of AI.

Twilio is by all accounts one of the darlings of the API economy. Prior to Twilio’s existence, if a developer wanted to programmatically send SMS messages to hundreds of cell phones all at once, that developer’s software had to independently (and arduously) interact with the wireless carriers associated with each of those phones. 
 
Then Twilio came along and exemplified the idea of developer productivity when it offered a single API through which SMS messages could be sent across multiple carrier networks without requiring the developer to learn the specifics of each network. It was such a boon to developer productivity and verticals like customer relationship management that it sent Twilio into the stratosphere. Today, over a decade later, the Twilio infrastructure powers over 64 billion human interactions per day.
 
Once you’re embedded into the global messaging infrastructure the way Twilio is, you have access to insights and data that help to improve that infrastructure. The company recently announced the results of one such study -- its 2020 State of Customer Engagement Report -- revealing some important trends. For example, trust in that infrastructure is becoming a major issue. As we as users get bombarded with calls and text messages from falsified origins, we start to tune out the telephone as a source of information — ignoring calls and text messages. 
 
According to Twilio VP and GM of Artificial Intelligence Al Cook (video, audio, and full-text transcript embedded below), "Last year, there were 58.5 billion robocalls in the U.S., which is just a staggering number. You think about how many that is per person per day. It is a truly staggering number. Robocalls kill trust in the phone. And we've been working every day to build new systems that can help people kind of regain the trust of their phone.”
 
The report from Twilio goes so far as to boast that 2020 will be the year that robocalls are conquered. When confronted with that prediction, Cook clarified that progress will be made and that in the next 12 to 18 months, the number of robocalls will be "greatly diminished.” Cook cited the STIR/SHAKEN protocol — a protocol that Twilio will support — as a major contributing factor in the war on robocalls. 
 
Another trend the Twilio study spotted is the need for improved and extended customer engagement with the aid of artificial intelligence. For example, all of us have encountered the obligatory disclaimer: “this call may be monitored or recorded for quality assurance purposes.” In reality however, most such recordings disappear into an audio archive never to be heard again. And along with that disappearance, thousands of customer experiences that should otherwise have been surfaced to decision makers are lost too. But properly trained AI algorithms can, at scale, find those needles in the haystacks and more readily escalate and remedy problematic customer engagements in the name of better customer experience and retention.
 
To hear more about Twilio’s findings on other fronts (including politics) and the company’s plans to act on them, be sure to watch the video below, listen to the audio, or read the full text transcript of the interview with Cook.

Video Interview with Al Cook, VP and General Manager of Artificial Intelligence at Twilio

Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Audio-Only Podcast of Interview with Al Cook

Editor's note: ProgrammableWeb has started a podcast called ProgrammableWeb's Developers Rock Podcast. To subscribe to the podcast with an iPhone, go to ProgrammableWeb's iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.



Full-Text Transcript: Al Cook, VP and General Manager of Artificial Intelligence at Twilio

David Berlind: Hi, I'm David Berlind. Today is Wednesday, February 12th and this is another edition of ProgrammableWeb's Developers Rock Podcast. We love developers and today we're talking to one of the darlings of the API economy. They make APIs that a lot of developers love. It's Twilio. You've heard of them, I'm sure. If you haven't, we're going to catch up on what they're all about. With me today is Al Cook. Al is the VP and General Manager of Artificial Intelligence at Twilio. Al, thanks for joining us today.

Al Cook: Thanks, David. Great to be with you.

David: Oh, it's great to have you. We haven't had you on the podcast before, so this will be a first time. Looking forward to it. For those people who've been living under a rock, don't know who Twilio is, tell us what Twilio does.

Al: Yeah, sure. So, Twilio is a cloud communications platform. You can use Twilio, whether you're a developer looking to integrate communications into your application, looking to add text messaging or phone calling into your flows, or whether you're an enterprise looking to deploy a flexible contact center or solve any of your customer engagement problems. Folks come to Twilio to be able to really tailor communications with their customers so you can build the exact experience and really get a great customer experience out of that.

David: I was reading some of the notes coming into this interview. You guys do like an extraordinary number of API transactions per day. What's that number?

Al: That's right. Overall, we power 800 billion human interactions every year, and that's across 170,000 different customers who are using the platform for one type of customer engagement problem or another.

David: Yeah, that's amazing. I think I read that it amounts to something like 64 billion transactions per day. That's unbelievable! You must have a crazy scalable infrastructure in order to support all of that.

Al: Yeah. We've been building this for over 10 years and we're honored to be able to be part of powering the communications that our customers' companies are building on top of us. The volume is just a testament to how important it is for folks to be able to really engage their customers in the right way.

David: What's a really good example of what one company is doing with your APIs?

Al: Yeah, the interesting thing about Twilio is there's all sorts of different use cases, whether people are using it for text messaging, voice calling, video calling, two-factor authentication or whether they're deploying a full scale contact center.

Al: So for example, Lyft has deployed Flex, which is our application platform, a contact center that you can deploy out of the box but customize and tailor anything that you want about it. So, you can really integrate the front end user experience, the agent experience, into your CRM, into all of the different backend systems that you need to be able to use as a contact center agent, and also customize all of the routing and all of the backend logic as well. And so, folks from Lyft and Shopify and all sorts of different, both digital native companies and large enterprises, have been using the Twilio stack to really build the experience they want.

David: You just said something that — I want to stop you there because we're using that same phrase in an upcoming report about all the different business models that are out there for the API economy. You said digital native. What do you mean by digital native?

Al: Yeah. Some companies grew up in a kind of a different era and they've been going through a sort of digital transformation, and they've used Twilio to power that digital transformation: to be able to accept omni-channel communications and to be able to engage in the way customers want.

Al: Other folks have kind of grown up in this economy and have always been built to be on the very forefront of technology, using everything in their tool set. We work with folks on both sides of that spectrum and everything in between. But digital natives, folks who've really grown up in this economy, have a very, very rapid way of adopting technology and are able to really, really use that to their advantage to disrupt industries, move fast, and make a big difference.

David: Yeah, they have a distinct advantage. And so a lot of those companies that are going through the digital transformation, they kind of aspire to move into a digital native state—

Al: Right.

David: ... much like the people who are trying to disrupt them. Okay. You guys just released a survey. It's got some fairly significant findings in it. You've also made some predictions. So, what is the survey that you've released?

Al: Yeah, the survey is about the state of communications. As Twilio, we have this really interesting ability to survey the landscape because so many folks are coming to us from so many different angles and saying, "Hey, help me solve my problem. Help me engage my customers in a better way. Help me improve this flow." And so we get to see these bright spots of what is going on in the industry at large.

Al: And so, through the survey, we've covered trends like businesses really, really increasingly needing to engage in long term conversations with their customers, not just the sort of transactional interactions, but long term conversations that last and really promote lifetime customer value. We also see a lot of trust issues that we've been working with the industry to resolve. Things like robocalling have made a huge impact, particularly here in the U.S. Last year, there were 58.5 billion robocalls in the U.S., which is just a staggering number. You think about how many that is per person per day. It is a truly staggering number. Robocalls kill trust in the phone and we've been working every day to build new systems that can help people kind of regain the trust of their phone.

David: I agree, by the way. I mean, I read in the report that 2019 was the year that people stopped answering the telephone and I can't argue with that. Here in my household, we literally let the landline just ring all the time. We never bother to answer it anymore. Sometimes, you look at the display to try to see who it is, but it's very hard to tell and half the time when you pick up the phone because you think it might be a local number, they've tricked you. They somehow constantly reprogram their systems and make sure they're calling local numbers from local numbers, so it looks like they're somebody from around the corner or the school or something like that, and it turns out to be one of these robocalls. So, I completely distrust the public telephone system right now.

Al: Right, right. We've been working to improve that, and working on systems whereby enterprises can authenticate their identity and working with partners to be able to show on your phone when you answer that call, that you know it really is from a call that you want to take.

David: Yeah. I read about this thing called SHAKEN/STIR. What is that?

Al: Yeah, SHAKEN/STIR, it's actually a federally mandated thing that companies have to adopt to be able to really authenticate and identify an endpoint and that's a big part of the problem. But also getting the identity down to the user's actual devices is part of that as well. We're working on solving the entire problem end to end. And then I think the other part is as folks look to engage in different channels and different methods, people need to be there on the channel that the customer, that the user, wants to be engaged on.

And we talked a bit about how that's important for business, but we also see that being important in politics as well. We power a lot of political communications on our platform and we see that politics are moving beyond the polls. Things like being able to text message your constituents, being able to engage, again, in a long term and a meaningful dialogue, are really, really important, and we're working to help political parties and political candidates reach people in the way that they want to be reached as well.

David: Yeah, I saw that one of the big trends was the fact that now the engagement cycle in politics is longer. In fact, we're constantly engaged. Once an election's over, we're re-engaged almost immediately right up to the next election, whether it's two or four years away. And so, I can imagine, especially given what you just said about the distrust over robocalls, the different channels of communication that are available to us, text messaging, mobile applications, our landlines, on the web, et cetera, I'm assuming that as users or most users would like to establish some sort of preferred channel of communication and then keep it there so that they're not getting overwhelmed on these other channels by the same company or organization. Is that a part of your—

Al: Right. Yeah, I think that's definitely true. People have their preferred channels and we see this all the time, that customer experience is increasingly a differentiator and the companies who are doing great at this are folks who are really reimagining the journey from the ground up from a customer perspective and thinking about things like well, what channel do they want to be contacted on? What is the right way and what is the right method of communicating exactly the right information at exactly the right time? And these are the kinds of things—

David: I went through this just yesterday. I was in a retail store, I won't mention who they were, and I went to check out and it asked me at the checkout counter, the credit card machine asked me if I wanted to receive offers of specials over my mobile phone via text. But I already have that retailer's application on my phone and I was surprised that I was being asked this when I already have their mobile app and they could actually communicate with me that way as well. So, there's probably a bit of a challenge on the back end to create this sort of 360 degree view of a customer in a way that you know what channels they have available to them, whether or not you can use those channels or not and then which one's the priority.

Al: Right. Well, David, that is exactly the problem when you're deploying a sort of siloed SaaS application that just does what it was intended to do and nothing else because you can't integrate it with any of your other systems. And so you might end up with piece parts that seem like that makes sense on their own, but you can't tie the whole thing together to make a cohesive customer experience that is really to your point behaving how you want to do and kind of making sense together. And it's only when you have developer APIs and when you can really tweak and change and fine tune the behavior, that you can actually get the experience that you want. You have to be able to customize things.

David: Now, you're the GM and VP of AI, so tell me how artificial intelligence fits into the trends that you've spotted and some of your conclusions.

Al: Yeah, our trend was around conversational AI. Folks who hear conversational AI might think about things like a voice assistant, like an Alexa or Google Home, or they might think about messaging bots on a website where you can type in and try and get help from a bot.

But really what we see is this is just a tip of the iceberg of what conversational AI can do. I think you see the early deployments of these, kind of these two main categories of things, because that happened to be a good place for folks to get started. It's been a good place for folks to experiment.

But the real power of conversational AI, when you can truly talk to a computer or type to a computer and it absolutely understands what you're saying, what you're doing... I don't think we've even begun to unlock the potential of that. And it doesn't, by the way, have to be just talking to a voice assistant or talking to a computer. It can be that the computer is sort of in the loop on a human-to-human interaction, and then being able to extract more value out of that. Whether that's, for example, a contact center use case: being able to help the agent and guide the agent in real time because you're able to understand how the conversation is going, or being able to extract insights out of what's going back and forth. And those are the areas that I'm really interested in.

David: Well, you are the VP and GM of AI at Twilio. So, given all of those challenges, it sounds like there's a lot of opportunity, but there's some challenges. What is Twilio going to do about it? What do you have coming that's going to help businesses wrestle all of these opportunities to the ground and take advantage of them?

Al: Yeah, we've been working for a few years now in this space and we have a product, Autopilot, which is a self-service messaging bot framework where you can build conversational IVRs and you can build bots on it. I think one of the things that we've seen there is that when conversational AI is deployed well, it's not deployed in a siloed standalone system. Often when you're talking to one of these systems today, you find that consumers don't trust or don't believe that they're going to be able to get out of the interaction what they're looking for. And very often that's because either it's not really true AI and it's just picking up keywords, a sort of glorified FAQ tool that you can't actually get to do a thing for you. Or it's because it's just not integrated in a good way, and so you end up with a system where you're talking to a bot, the bot gets to the point where it can't take you any further, and then you're kind of ejected out of that experience and into a sort of traditional human experience. That kind of forced eject is not a good experience for anyone really.

David: I think we've all been through that where you're going through the IVR system, you're hitting buttons and suddenly you hit the end of the road and you're like, "How do I get a human because this thing just died on me here?"

Al: Right, right.

David: Yeah.

Al: Folks who have been building on Twilio have been able to build a better experience than that, whereby you can have agents monitoring a whole bunch of bot interactions, being pulled in and out as they're needed, supervising those interactions, and using the bot to sort of superpower the human rather than using it as a firewall in the front. And I think that makes a big difference from a customer experience standpoint.

So, as we are working on that space, we've invested a huge amount in natural language understanding and being able to really analyze and understand a conversation. And as I was saying, I think being able to deploy that technology in other use cases becomes really, really interesting, right? So, take your contact center, for example. A contact center is this treasure trove of information that is typically not used very well. Think about every single one of your customers who ever calls you or ever messages you on your website: what they are saying about your products, about your company, about what they're struggling with, about what they're hoping to do. That sits in that contact center and very often that's where it remains, right? It never comes out of that.

And you think about, rewind, I don't know, 10 years ago or something and you think about the power that Google Analytics had in helping companies understand the pipeline of their prospects and like how do people navigate a website, and you could derive a lot of meaning from that. Well, take that and apply it to what your customers are actually saying to you in your contact center.

David: Yep.

Al: That's even more detailed information. If you can analyze it and say, "Well, here are the reasons why people were frustrated, why they churned, why they... This is the set of events that need to happen for people to upsell." That is a hugely powerful source of business analytics information that, until conversational AI got to the point it is at now, has been very, very hard to do anything with, right? So, typically a contact center might review 1% or 2% of the calls, but you can do so much more with that if you can analyze 100% of it automatically.

David: Right. And I completely agree. I mean, who of us has not made a call to a contact center and suddenly you realize things aren't going so well. You start complaining to the agent that you're talking to. You know that the call might be recorded because they say so when you first make the call, but you're thinking in the back of your mind, "This is never going to get to the people who have to change this. And I'm about to leave this company as a customer because of the poor customer service," or something like that. You really have no faith that the conversation you're having with that agent is actually going to get to somebody who can do something about it.

Al: Right.

David: So, and I can understand from the company's point of view, they're frustrated. They need to be able to surface the most important feedback that's coming through that channel and then be able to act on it in a way. And AI clearly, especially when it's scalable, can sift through that haystack looking for the needles and get them to the right people at the right time.

I want to come back to the predictions. You guys made four predictions as a result of the survey. Can you quickly go through those?

Al: Yeah, so the predictions are that conversations with businesses will be increasingly important; that with conversational AI, we're just at the tip of the iceberg; that robocalls will kill trust in the phone and folks need to work on improving that; that politics moves beyond the polls and candidates need to engage in a thoughtful way; and that customer experience is increasingly important as a differentiator.

David: Yeah. The report said that robocalls will be conquered in 2020. Are you confident about that? Are we there?

Al: I think we are making some huge progress as an industry with things like STIR/SHAKEN as we were talking about. And that number, 58.5 billion robocalls, I think that is a number that we can make a material difference on in 12 to 18 months and we are working incredibly hard and I think we'll see them very significantly diminished. Will they go down to zero? That's a different question, but will they be significantly diminished? I think we can make a big, big impact on that.

David: Well, I pray that you're right. And for developers, because this is, after all, the Developers Rock Podcast, what is your advice to developers here given all these trends? I mean there must be some developers out there thinking, "Hmm, you know what, this whole thing about customer experience is a big deal. I should get smart about that. And when I'm out there building applications for my clients, let's say, or for the company that I work with, I should kind of take a little more ownership of the customer experience side of things to make that organization more successful at what they do." Is that kind of one of your current efforts to get developers educated on these issues?

Al: Yeah. We spend a lot of time doing design sessions with development teams within our companies. One of the things that I think is interesting across all of these trends is the need to really, really redesign customer engagement almost from the ground up and really think about how do you build the right experience.

Take conversational AI, for example. There's a lot of thought that goes into how do we want our voice to come across as a brand? How do we want this kind of flow? Or how do we, what kind of feeling do we want out of this? It's much more than just a technology and so the interactions that I enjoy the most are where we have developers and designers sitting in a room together and really thinking about how do we design this from the ground up without being kind of shackled by the kind of traditional legacy systems where you can kind of hammer the thing into submission and make it do what you want.

David: A lot of points of view have to be brought into that conversation. People who know something about artificial intelligence and what it's capable of, people who understand the customer experience and, of course developers, because at the end of the day, it's their job to string it all together into something that's really frictionless, right?

Al: Right. Yep. That's exactly right.

David: Okay. Well, thank you very much for joining me today.

Al: Yeah, thank you. It was great talking with you. I really enjoyed it.

David: It was great to have you. We've been speaking with Al Cook. He's the Vice President and General Manager of Artificial Intelligence at Twilio, one of the darlings of the API economy.

When you see this podcast, you may also find the full text transcript on ProgrammableWeb. Just search Twilio in our search box. It'll probably lead you there. Please come back to our YouTube channel at youtube.com/programmableweb for more interviews like this one, and please, of course, come to ProgrammableWeb.com, where you'll find all of the text as well as the audio so you can just listen to the podcast on your iPhone or your Android device. Thanks for joining us. We'll see you next time.

Content type group: 
Articles

Apollo GraphQL Co-Founder Geoff Schmidt Discusses Federation of The Graph (video)

Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Includes Audio Embed: 
Yes
Related Platform / Languages: 
Product: 
Summary: 
In this episode of ProgrammableWeb's Developers Rock podcast, ProgrammableWeb's David Berlind and Bob Reselman talk GraphQL with Geoff Schmidt, co-founder of Apollo GraphQL, the leading solution provider for standing up GraphQL-based APIs. Federation of "the graph" was one of the many topics.

As API architectural patterns go, GraphQL — which is most definitely not RESTful — is the new darling on the block. More and more enterprises, from GitHub to the New York Times to PayPal, are working with the technology, which was invented at Facebook.

But similar to the early days of RESTful APIs, it's a little bit of the Wild West when it comes to the frontier of GraphQL. Tools do exist, but it's not like there's a flood of them on the market. However, as of the time this article was published, there is one company that got the early jump and that's nearly synonymous with the idea of standing up GraphQL APIs: Apollo GraphQL.

So, when the opportunity came along for ProgrammableWeb to do a podcast interview with Apollo GraphQL co-founder Geoff Schmidt (the video, audio, and full-text transcript appear below), we didn't hesitate to get him on the docket. We were also sure to include our in-house GraphQL expert Bob Reselman, author of one of the most comprehensive guides to GraphQL that you'll find on the web.

In the interview, Schmidt, Reselman, and I spend a fair amount of time talking about what GraphQL is (for those of you who are not familiar with it) and then where Apollo GraphQL fits into the landscape. As can be expected, Schmidt is a huge fan of GraphQL and loves to espouse some of its advantages over REST as an API architectural pattern. He touts GraphQL's ability to do in API fashion what Structured Query Language (SQL) once did for databases: respond to a single query with related data that spans multiple tables. Surprise! They're both query languages.

All this said, across the API economy, REST is still the predominant architectural style, followed by RPC (and in particular, the XML-RPC flavored variants of RPC). But who knows? That may not be true for long as more and more companies discover the virtues of GraphQL.

Before the interview was over, however, we talked about language independence (Apollo GraphQL is primarily Node.js-based but modules in other languages can be used) and eventually fell onto the topic of GraphQL federation (a new capability for Apollo GraphQL). In a nutshell, GraphQL federation implies that the data that makes up a graph can come from multiple sources, each of which — in very microservices fashion — is administered and managed by different teams or departments.

To hear about everything Schmidt had to say about GraphQL and Apollo’s approach to helping API providers and developers working with GraphQL, be sure to watch or listen to the podcast below. 

Video Podcast: ProgrammableWeb's David Berlind and Bob Reselman with Apollo GraphQL co-founder Geoff Schmidt

Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Audio-Only Version

Editor's note: ProgrammableWeb has started a podcast called ProgrammableWeb's Developers Rock Podcast. To subscribe to the podcast with an iPhone, go to ProgrammableWeb's iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.



Full Text Transcript of Interview with Apollo GraphQL co-founder Geoff Schmidt

David Berlind: Hi, I'm David Berlind, editor in chief of ProgrammableWeb. Today is February 14th, 2020, Valentine's Day, special day. We love developers here on ProgrammableWeb's Developers Rock podcast, and today we have two very special guests with us. One of them is Geoff Schmidt. He is the CEO and co-founder of Apollo GraphQL, and our other guest is one of our authors, Bob Reselman, who writes about GraphQL and other advanced API technologies. Geoff, welcome to the show.

Geoff Schmidt: Well thank you so much for having me. Excited to do this.

David: And Bob, welcome to the show.

Bob Reselman: Hi. Thanks, David, for having me again.

David: Yeah, well we love to have you here Bob, because you're such a great writer for us. For those of you who haven't read some of what Bob writes on ProgrammableWeb, we strongly recommend doing it because he writes all the technical details about things like GraphQL, which is the subject of our call today. Bob, in fact, authored what I believe to be one of the most comprehensive guides to GraphQL for us on ProgrammableWeb. It's easy to find. If you just Google GraphQL on ProgrammableWeb.com you'll find it. So I want to start with you, Geoff. Geoff, you're the co-founder, as I said earlier, and CEO of Apollo GraphQL. What is Apollo GraphQL?

Geoff: Well, Apollo is a way to build a data graph inside your company. To talk about what a data graph is for a second. Now let's think about API technology and where it's coming from and where it's going. The world's gotten a lot more complicated. The apps that we build have gotten a lot more complicated. There was a time when building an app, well you might have a web server, it might talk to a database, you might access that app through a web browser. Pretty simple layout that was easy to understand. Now there's a lot more moving pieces. It might not just be a website. In fact it's probably not, it's probably an iOS app, probably an Android app. You might have a lot of different channels you use to reach your users. It might even be a voice assistant. It might be something IoT related. It might not even be a first-party property. You might want to reach your users through partnerships, integrations. Why should your first-party app be the only way people access your services?

And what's behind those apps is a lot more complicated too. It's not just a web server and a database. You've probably got a bunch of microservices. Increasingly you might have other SaaS APIs you're pulling in, lots of different data sources, multiple clouds. Whereas traditional API technologies like REST and SOAP grew up in a point to point way of thinking, where if I want to talk with you, we dig a ditch and we bury a cable and now we can talk. The data graph is more like a telephone network where you can dial any number and connect to anyone you need to, because if you think about it, you've got many different microservices or many different data sources. You've got many different things that want to consume those services.

So what you need is a more flexible way where instead of having to build a new API endpoint, a new REST endpoint for every use case, every combination of data, every screen in your app, every time you want to fetch a different group of things in a different combination, the data graph gives you GraphQL, which is almost like SQL for databases, because it's a declarative language for saying... I don't have to write code anymore to fetch a particular combination of data. I can just describe what I want declaratively, I can use GraphQL to express my needs, and then you can have a query plan or a resolution engine that's able to go fetch the data wherever it may be and assemble exactly what you need.

So it means front end developers don't have to ask the backend for new endpoints. It means you can build new products really quickly, new features really quickly. It means your partners, if you have a public API, it means that suddenly those people can build much more rich and complex products. It also means your apps are faster, you're putting less data across the wire, they're more secure because your security isn't dependent on all this handwritten code. So there's a lot of benefits to rethinking the way we think about APIs in this much more connected, many to many world. And that's what the data graph is all about.

David: I want to just stop you there. Obviously you mentioned SQL as a forerunner of this. SQL, GraphQL, they both share the QL, the query language bit. There are some similarities there. In the old database world, we wanted to get a whole bunch of data from multiple connected tables; now what we're talking about is getting a collection of data from multiple connected microservices all at once. That's one of the big advantages of this API technology. Instead of going out and fetching the data from each of those sources independently and then tying it together when you bring it back, you can, with one query, go out and get the data, to the extent that it's connected across all those microservices, and bring it all back in one fell swoop. Is that correct?

Geoff: Yeah, you've got it exactly right. If you think about it, SQL revolutionized databases. Before SQL, your query planner wasn't a piece of software. It was a human being. For every different use case, you'd have to write a bunch of custom code that joined data from here, joined data from there, and that didn't scale. That didn't scale as early as the early 80s. And what's happening now is the emergence of a similar model for "how do you get data out of the cloud?" So SQL is about "how do I get data off a disk?" GraphQL is "how do I query all these different services in the cloud?" For all the same reasons that we wanted to move to a declarative paradigm with databases back in the 80s, there's now this move to a declarative, query-based paradigm in how you talk to microservices and talk across the internet to data sources and backend services.
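Editor's note: for readers new to GraphQL, the declarative query Schmidt describes looks something like the following sketch, built with the graphql-tag library that Apollo popularized. The schema fields, and the idea that each block is resolved by a different backend service, are hypothetical illustrations rather than a real API.

import gql from "graphql-tag";

// A hypothetical query spanning three imaginary backend services.
// The client declares the shape of the data it wants; the graph
// decides where each field actually comes from.
export const ORDER_HISTORY = gql`
  query OrderHistory($userId: ID!) {
    user(id: $userId) {        # resolved by a user service
      name
      orders(last: 5) {        # resolved by an order service
        id
        total
        items {
          product {            # resolved by a product/inventory service
            name
            inStock
          }
        }
      }
    }
  }
`;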

David: Understood. So now we know what GraphQL is. We've talked about it a little bit here and described it to some extent. I'm sure there's more to it. What is Apollo?

Geoff: Apollo is probably the easiest and fastest and most popular way to build a data graph. And this idea of a data graph, a map of all of our data that we can create, that might sound really elaborate and intimidating and difficult, but it turns out it's something you can bring into your team or your company really quickly. Apollo lets you build a data graph. You can start off just with a few lines of JavaScript, really. You can be up and running in an hour or two because you can build a data graph on top of the existing APIs or backend services that you have without having to change or rewrite anything. You can get into production with that really quickly. So there's both an open source component to the Apollo platform, Apollo Client, Apollo Server. Over a million downloads a week now, so it's gotten really popular, probably the main way that people use GraphQL today.
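Editor's note: Schmidt's "few lines of JavaScript" claim is easy to illustrate. What follows is a minimal, hypothetical Apollo Server written in TypeScript — one type, one query, one resolver; the Book type and its hard-coded data are our invention for illustration.

import { ApolloServer, gql } from "apollo-server";

// Schema definition: one object type and one root query field.
const typeDefs = gql`
  type Book {
    title: String!
    author: String!
  }
  type Query {
    books: [Book!]!
  }
`;

// Resolvers: in a real deployment this function would call an
// existing REST endpoint or database instead of returning static data.
const resolvers = {
  Query: {
    books: () => [
      { title: "The Design of Everyday Things", author: "Don Norman" },
    ],
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen()
  .then(({ url }) => console.log(`Graph ready at ${url}`));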

But then there's also a complete graph management platform. So as you go from one or two developers using GraphQL to multiple teams across your company, you need to solve those problems of "how do we scale all of our workflows and how do we scale security? What's our practice as we go from a couple of people using this to a vision about how we're going to roll this out across our whole company and have truly one connected graph?" We've got your back the whole way there, and it's used everywhere from small project startups who are just getting started with GraphQL all the way up to companies like Expedia, which is building one data graph across, I think, 20 plus brands across the whole Expedia Group portfolio. Or Airbnb, where Apollo is going to power all customer experiences. It runs the front page of the New York Times. It powers a lot of stuff at PayPal. It's a very scalable solution for getting started fast without having to get too much politics or buy-in, but it scales all the way to the most demanding use cases, both in terms of query volume and in terms of the number of developers who are working on it.

David: Well, I'll say it's also one of the most famous solutions in the GraphQL niche of the API economy, because you can't talk about launching a GraphQL API without Apollo coming up in the conversation. At least that's my point of view as the editor in chief of ProgrammableWeb, where we like to think of ourselves as the journal of the API economy. The two are almost synonymous with each other. In fact, off the top of my head, I don't know of another solution that I can recall by name that does what Apollo does for GraphQL.

I think back to the days of Java where there were a whole bunch of different vendors providing Java solutions, J2EE servers. There was Sun, of course, who invented Java, and then there was BEA and IBM. But off the top of my head, Apollo seems to be the one that's definitely got the early jump in terms of being the go-to platform for providing GraphQL APIs. Bob, I want to go to you. You've obviously written a lot for us and I think, if I'm not mistaken, Apollo primarily runs on Node. What have your findings been in terms of other GraphQL platforms?

Bob: There are. There are other platforms out there, but before I go forward with the other platforms, I need to comment on Geoff's modesty, because he's being very modest actually about what GraphQL does. In particular with regard to SQL, one of the benefits that GraphQL brings to the arena is that with SQL you've got to know a whole lot just to get something simple done. Select this field, this field, this field from this... do joins, jump around, turn around three times. And you have no intrinsic way of discovering what those data structures look like. GraphQL brings that right to the forefront. So the concept of a join is completely irrelevant in GraphQL, and you can do what's called introspection, which allows you to inspect the complete type system of the data structure. And that's really, really powerful.
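Editor's note: the introspection Bob mentions is itself just an ordinary GraphQL query against the spec's reserved __schema field, which any compliant server with introspection enabled will answer. A minimal sketch:

import gql from "graphql-tag";

// Ask the server to describe its own type system. No prior
// knowledge of the schema is required; this works against any
// spec-compliant GraphQL endpoint with introspection enabled.
export const LIST_TYPES = gql`
  query ListTypes {
    __schema {
      types {
        name
        kind
      }
    }
  }
`;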

The other thing that GraphQL brings to the table is this notion of subscriptions, of really real-time messaging. Databases have triggers and you can get around it... they're there, but they're not first-class citizens. You've got to know a lot; you've got to register a procedure around the trigger. In GraphQL, it's really just as simple as registering for a subscription, as you would in any other message brokerage architecture. Very, very powerful. And what it does is it creates this notion of what's called a hybrid microservice model. Whereas in some world you might have REST, which is intrinsically synchronous, or you might have RabbitMQ, which is intrinsically asynchronous, bringing them both together into a unified programming and consumption experience is pretty special, and GraphQL does a good job with it. And I've got to say, Apollo does a really, really good job of making that happen.

David: So all of the demonstration code that you've built that people can come and experiment with on ProgrammableWeb was all built using Apollo, isn't that right?

Bob: Yes. Yes it was. It was all built using Apollo. It's straightforward. You have to be careful. I don't want to burst your bubble, Geoff. You have to be careful. But again, Apollo does a good job of saying, here's what you need to be careful about. And let me give you two use cases where one needs to be careful.

The first one was really the Monet project that happened over... I think it was at Netflix or Facebook, I forget, I'd have to look it up. But what they did is they combined all their microservices together, all their services together, under a single GraphQL interface or GraphQL graph. And what happened was that they had some significant performance problems, and the performance problems really had nothing to do with GraphQL, the specification, or any of that. It had to do with the fact that they were keeping all the services in their same distribution model.

So for example, service A's coming out of India, service B was coming out of East Elbonia, service C was coming out of Mars. So even though under GraphQL you have one trip to the network, behind the scenes, there's still all this multiple network latency going on. What they did to solve the problem is they brought all the data into a common data center. So latency, we have to always be aware of latency. Pay me now, pay me later. The other thing that we have to be aware of when we're using GraphQL, and Apollo points this out, they're really good about this, is that when you start using subscriptions, you have to be very careful about choosing your message brokerage architecture that's going to back those subscriptions. The one out of the box, it's good for learning and good for experimentation, but it doesn't scale. And again, Apollo mentions this, so we need to be just aware of that.

There are other ones: there's a .NET solution, there's a Java solution, there's a Python solution, a Ruby solution. All these people are going out and saying, "Look, if you want to create a GraphQL experience, you have different languages. Remember, GraphQL is a specification only. It does not dictate implementation." But I like GraphQL. Hey, you got me.

David: For developers who are listening to this and who want to experiment with GraphQL, they're not tied to going to Node.js, which is the platform for Apollo. They can go to one of these other platforms. Going back to you, Geoff, knowing that, are you going to stay focused on Node.js as your platform of choice or are you going to be offering the same functionality that you're providing on Node.js on other platforms?

Geoff: So GraphQL's a whole ecosystem and the Apollo platform really embraces that whole ecosystem. So yeah, I think you're referring to Apollo Server. Apollo Server is a great way, if you're a front end developer, to build the data graph quickly in your company. And the great thing about it is, it's not a silver bullet that's going to solve every performance problem you have, but it is almost certainly going to be better than what you were doing yesterday. So it's a step forward. And sure, as our apps, as our architectures get more complicated, we always have to pay attention to data residency, latency, and the scalability of our backend systems.

We have several different components. One is Apollo Client. That's how you query your graph. You can get that for React, iOS, Android, some great libraries. You don't have to use Apollo Client. You can use any GraphQL or even just REST-based query system to query your graph. Then in terms of how you connect your backend service to the graph, you can use Apollo Server or you can use any number of different libraries out there for different languages. Whatever language you're building your services in, that's an option for you. You just have a choice. If you want to use Apollo Server, you can write a little bit of JavaScript to bind to existing APIs, or you can use GraphQL Java; there are a lot of different ways that you can get your services on the graph. Not just the GraphQL standard, but other open standards like Apollo Federation or Apollo tracing; there are various standards that we've defined on top of the graph so that all of these services can connect to build one graph. Because our vision is to have one graph across your company. You can take a few steps to get there, but since it's all about connecting data from different sources, we want to be able to help you integrate and manage where the data's coming from, whatever language you want to use, wherever you want to consume it. That's the vision of the graph.

And then the other component is... We also have Apollo Gateway, which is a component for when you have all these different backend services: what's the query planner that stitches it all together? What's the ingress point going to be where that query comes in, so it can get fanned out to all those pieces? And then on top of that, there's Apollo Graph Manager, which is... now we've got this increasing number of people on this platform. Just like when I'm writing a program, I want to have source control. I don't want the code to just be defined by whatever is on my laptop. I want to have a way of seeing versioning, I want to know what's true, what's in production, what's my process for changing and managing this.

One of the first things you find you want in a graph is a server that has all your schemas on it. What's the definition of my graph? I have many different teams that are... With REST APIs, you might change that API a couple of times a year, if that. With a graph, you can be changing your graph multiple times a day. And because it's designed around a much more agile practice for how we can continually evolve and refine the graph, because we can have much better workflows and tooling, that's possible. But you start to need a server that has all your schemas, you need to be able to manage how we secure our graph, how we federate all these different graphs together. That's Apollo Graph Manager, which is totally language agnostic. We really embrace the whole GraphQL ecosystem. It's not all going to be written in Apollo, but I think one of the reasons why Apollo is so popular is that it really has this perspective of integrating many different GraphQL data sources, and that's why, even as there are a lot of other super exciting things happening in the GraphQL ecosystem, Apollo is really about how you get all that stuff into one graph so you can have the benefit of all of it.

David: When I look at GraphQL, I am reminded a little bit of some of the RPC technologies that came before, because on the client side you have to have some basic awareness of the function or the procedure that you're going to call on the server side. And I believe those procedures are referred to as resolvers. So one of the questions I was getting at was, well if you have to write these resolvers, these procedures on the server side to do some amount of processing, some procedural stuff, transformations, whatever it may be, are you saying that Apollo Server is flexible enough that if your language is Java, I can go out and get the Java framework, develop my resolvers there, and plug them right in underneath Apollo Server alongside of the other Node.js platform that you're already running on? Is that how that works?

Geoff: Exactly, yeah. If you want to use Node.js to write resolvers, you can use Apollo Server. If you want to use Java to write your resolvers, you can use GraphQL Java, pick your language, pick your GraphQL library, and it's all standards-based, and then you can combine all that together with Apollo Gateway and you've got one cohesive data graph. Different teams can use different languages and it can all plug together seamlessly into one shared graph that you have one point of view on. And it's amazing. The really cool thing is the graph acts as an abstraction layer. You can browse this connected map of all your data. It's all beautifully documented. The documentation's always up to date. It's always complete. A far cry from what we've experienced with some other API technology in the past. You can make anything on the menu and get it and you don't even necessarily need to know. Today this could be a monolithic Ruby on Rails app for example, and tomorrow this could have gotten factored out into five microservices, some in Java, some in Scala, take your pick, and you the user, you the developer of that iOS app or that web app, aren't affected by that change. So that's the power of putting this abstraction layer and this language we're talking about our data into our architecture.

David: Yeah, that's always been the benefit of API technology in general, is that the client is sufficiently decoupled from the server side of the equation, giving you that flexibility and all sorts of other benefits. We won't go into them here. I talk a lot about them in our APIs 101 videos. But I want to come back to... You were talking a little bit... You mentioned this word, so I want to go back to that, which is federation. When you're building big graphs... And by the way, I just want to... Also for people who are still trying to wrap their heads around this, a real good example of a graph that all of us probably have seen and use on a daily basis is inside of Facebook and any application we use, whether we're going through the web front end or through our mobile app on iOS or Android. This idea that everything is connected in Facebook, you've got friends and you're connected to those friends, and there's pictures that those friends have and you can look at those pictures and those pictures are tagged with other things. That's the graph we're talking about. What do you mean by federation and where does that play a role in all of this and what is Apollo doing to help people with that problem?

Geoff: So typically the way the graph starts in a company is there's a product developer or product development team and they say, "Hey, we have to pull data from many different sources. What if we could stand up a graph so that we have flexible access to all these different existing data sources?" And you can do that very quickly and very easily. What you find is at some point, that sometimes you're a victim of your own success and it starts to get popular and suddenly there's a lot of people that want to use this graph you've built and suddenly you also find, "Hey, a lot of the people that provide backend services, they want to get their services onto this graph too." You end up with this situation where you've got this one code base, it's your graph server, whatever language you built it in. It's getting more and more complicated. You've got this problem where everybody owns it, nobody owns it. It's becoming this central point in your architecture.

And you look at this and say, "Hey, I love what the graph does for me. But I don't want to create another monolith within my architecture with this graph server." So what federation lets you do is it lets you say, "Hey, instead of there being one code base and one team that maintains the graph, let's divide those responsibilities up over any number of different teams." So maybe I'm going to build the recommendation service and you're going to build the inventory service and you're going to build the payment service. And each of these services can have its own schema in the graph and they can have foreign key references, so the product service can reference the inventory service or the user service can reference the comment service.

You can have good separation of concerns. So each service can handle just its own concern in terms of the data and the mutations it exposes to the graph. But it can all appear as one cohesive graph to the end user through this federated architecture. So that gives you the benefit, on the user side, on the consumption side, of looking at it as if it were this one beautiful, centrally planned graph, while also having the ability to implement it in pieces and decouple the development cycle of each of those teams, which is necessary once the graph grows past a team or two, because what you find is people see a lot of value and benefit from this and they want to scale it across their company and they need a viable model for how they're going to build and deploy that. So that's what federation is all about.
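Editor's note: in concrete terms, each team's service declares the entities it owns (or extends) with federation directives such as @key. The sketch below is written against the @apollo/federation library that was current when this interview took place; the Product entity, its sku and inStock fields, and the inventory lookup are all our invention.

import { ApolloServer, gql } from "apollo-server";
import { buildFederatedSchema } from "@apollo/federation";

const typeDefs = gql`
  # Extend the Product entity that the product service owns,
  # contributing only the field this team is responsible for.
  extend type Product @key(fields: "sku") {
    sku: String! @external
    inStock: Boolean!
  }
`;

const resolvers = {
  Product: {
    // The gateway hands over a Product reference ({ sku }); this
    // hypothetical check stands in for a real inventory lookup.
    inStock: (product: { sku: string }) => product.sku !== "SOLD-OUT",
  },
};

new ApolloServer({ schema: buildFederatedSchema([{ typeDefs, resolvers }]) })
  .listen(4002)
  .then(({ url }) => console.log(`Inventory subgraph at ${url}`));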

David: Yeah. And of course the common thought on microservices architecture is that this should be decentralized out to the departments so that you have departmental responsibility for the various services. It shouldn't be a central IT department to wrestle all of this to the ground. If you're responsible for customer data, then you provide the schema and the services for getting customer data into the larger graph. And if you're the one who provides product data in another department, you deliver that. And that division of responsibility works really well for the more advanced organizations today that get the advantage of a microservices based culture. I think that seems to be where things are heading.

Geoff: Yeah. And the graph can provide you a very smooth path to that. So you can start with one graph, it's just built by one team. It's very simple and centralized and you can easily pull out pieces and federate it as you see the value in it, and as it proves itself to you.

David: But what does Apollo do? Do you have a separate offering that ties the whole thing together? Or is that part of Apollo Server? What do you—

Geoff: We created the Apollo Federation standard based on working with quite a few large organizations who were getting to this point. So we provide both Apollo Gateway, which is a federation gateway, so it solves the problem of... GraphQL in many ways is fundamentally about: what's the interface between your data center and the outside world? You're going to have any number of different ways you might structure your APIs inside your data center. You might use gRPC, Thrift, REST, you might have some service mesh architecture, any number of different ways you can do it. But what's the abstraction we're going to put on that for the outside world? You need a gateway so that when those queries come in, you have a way of having a query planner, effectively, that says, "Hey, this is the query. I'm going to figure out which services it touches and I'm going to figure out the right order to call the services. How do I turn this query, which expresses the user's intent, into a set of operations I'm going to perform on those different backend services?"
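Editor's note: the gateway Schmidt describes was itself only a few lines to stand up with the @apollo/gateway library of that era. The service names and local URLs below are hypothetical.

import { ApolloServer } from "apollo-server";
import { ApolloGateway } from "@apollo/gateway";

// The gateway fetches each subgraph's schema, composes them into
// one graph, and plans/executes incoming queries across services.
const gateway = new ApolloGateway({
  serviceList: [
    { name: "products", url: "http://localhost:4001/graphql" },
    { name: "inventory", url: "http://localhost:4002/graphql" },
  ],
});

// Subscriptions were not supported through the gateway at the time,
// hence the explicit opt-out.
new ApolloServer({ gateway, subscriptions: false })
  .listen(4000)
  .then(({ url }) => console.log(`Gateway ready at ${url}`));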

We provide a public gateway, which is a complete open source federation implementation that has a query planner, all the stuff you need, and we provide Apollo Graph Manager, which has federation support, because the key problem you have to solve is... Look, now we're going to have all these different teams who are all building their own part of the graph. We want to give those people complete flexibility and complete agility to just go build their piece and not worry about it. But if you think about it, we're actually tackling a pretty complex problem here because if the product service and the inventory service, they both reference each other, we need to keep those things in sync and we need to keep the user in sync with what's happening.

So, Apollo managed federation is a feature in Graph Manager, so these services teams, as they're developing their services, right inside their continuous integration pipeline, can check to see: am I staying in sync with the other services? That gives you the confidence to ship multiple versions of your service a day, knowing the product service is going to stay in sync with the inventory service. It gives you the ability to say, "Hey, I've changed some of these services. I'm going to roll this out to the production servers. I'm going to take all the schemas, check them for consistency, find any problems, validate it against production traffic." "Okay, you changed something. Is this going to break any clients?"

If it's going to break a client, "Hey, we can look at the last 30 days of production traffic and say, 'This is going to break an iOS app that we shipped two years ago in India. What do you want to do about it?'" And then take all those changes and roll them up into a deployment of the new version of Apollo Gateway with the new configuration and the new query planner inside of it. It solves all those problems for you in a seamless and automatic way, which means you don't have to go build all that stuff to manage your graph. You can just get back to building your app, go back to building your applications, and it's a very fast and easy way to roll this out.

David: I could talk to you forever because it sounds like you understand GraphQL better than anybody on the planet. You guys have had the early mover advantage in terms of providing a great platform so that people like Bob can go off and deliver GraphQL APIs. Bob, one last question for you. When you were writing our big GraphQL series, you didn't know anything about GraphQL when you got started. How bad was the learning curve, or was it pretty simple? Were you able to get up and running pretty quickly? That's something that other developers are going to want to know about.

Bob: The learning curve was not arduous. It was okay. It was okay. It took me a while to wrap my head around some concepts. The big one really was pagination, handling pagination, and there's a very particular way you have to do it. Because it is fundamentally stateless, you make one trip to the network and you get back your data, so you have to have a way of communicating back to those servers about what your pagination state is. And it took me a little while to get my head around that. There were some other things. It's in the article, I think I point them out in the article, things that I had to pay a lot of attention to. It wasn't, "Oh my God, how did I get here and when is this going to go away?" Actually, it was quite enjoyable. It really was, and it really allowed me to think differently.
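Editor's note: the pagination pattern Bob is describing is typically cursor-based. Because each round trip is stateless, the client echoes back an opaque cursor from the previous response to say where the next page should start. The field names below follow the common Relay-style connection convention and are hypothetical.

import gql from "graphql-tag";

// Cursor-based pagination sketch: request a page of articles plus
// the pageInfo needed to ask for the next one. On the follow-up
// request, pass pageInfo.endCursor back as the $after variable.
export const ARTICLES_PAGE = gql`
  query ArticlesPage($first: Int!, $after: String) {
    articles(first: $first, after: $after) {
      edges {
        node {
          id
          title
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
`;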

David: Well, great to hear. Hey Bob, thank you very much for joining us.

Bob: Happy to be here.

David: Yeah, that's Bob Reselman. He is one of the authors of many of our technical series on ProgrammableWeb, and Geoff Schmidt over there at the Apollo headquarters. I want to thank you very much for joining us.

Geoff: It's a pleasure. Call me anytime.

David: Yeah, we're speaking with Geoff Schmidt there. He's the CEO and co-founder of Apollo. Geoff, where can the developers and everybody else who's watching this video find you guys?

Geoff: Apollographql.com.

David: Bob Reselman has of course written many of the technical articles that you can find on programmableweb.com. For ProgrammableWeb, I'm David Berlind. I want to thank you very much for joining us. If you want to find more videos like this one, you can come to programmableweb.com or you can go to our YouTube channel at www.youtube.com/programmableweb, and there you'll not only find this video but a whole bunch of other ones that we've recorded, so feel free to go up there, share the videos if you like. If you want to come to programmableweb.com and find the version there, we also add the full text transcript of everything that was said as well as the audio only version so that if you just want to listen to the audio, you can. In fact, you can get that audio by downloading it from iTunes or Google Play Music as one of our podcasts from ProgrammableWeb's Developers Rock podcast. Thanks again for joining us. We'll see you at the next video.

Content type group: 
Articles

Four51 Rebuilds and Relaunches OrderCloud.io Developer Portal

Super Short Hed: 
Four51 Rebuilds and Relaunches OrderCloud.io Developer Portal
Featured Graphic: 
Four51 Rebuilds and Relaunches OrderCloud.io Developer Portal
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Related Companies: 
Related APIs: 
OrderCloud.io
Summary: 
Four51 has redesigned and relaunched its OrderCloud.io developer portal. The new portal was rebuilt from the ground up, with key features including a UI designed for developers and non-developers, persistent tabs, document searchability, and open-sourced documentation available on GitHub.

Four51, an eCommerce solutions provider, has redesigned and relaunched its OrderCloud.io developer portal. When the portal was first launched in 2016, it gave developers access to a B2B eCommerce platform through the OrderCloud.io API. As eCommerce has evolved, developer needs and demands for an eCommerce platform have changed. Four51 aims to meet developers where they are with the relaunch.

“Enterprise development teams have realized that API-first, headless platforms allow them to keep pace with customer experiences,” Rich Landa, Four51 Chief Product Officer, commented in a press release. “These updates to our OrderCloud Developer Portal should continue to make it easier for developer teams to access and utilize our open APIs to design, develop, and deliver custom eCommerce, order management, and B2B marketplace solutions, integrated with their existing backend systems.”

The relaunched portal was built from the ground up with modern technologies and use cases in mind. Key new features include a UI designed for developers and non-developers, persistent tabs to increase productivity, document searchability, and the move towards open-sourcing OrderCloud documentation.

As more and more non-developers move into the development space through modern tools, the new OrderCloud.io portal is designed to be accessible to developers and non-developers alike; enhanced forms, for example, are readily accessible to non-technical users. Persistent tabs work like browser tabs, making it easier to toggle back and forth between documents. All docs can now be searched, including the API reference, blogs, and release notes. To open-source the documentation, Four51 has published the docs on GitHub. Check out OrderCloud.io to learn more.

Content type group: 
Articles
Top Match Search Text: 
Four51 Rebuilds and Relaunches OrderCloud.io Developer Portal

ScopeMaster

API Endpoint: 
API Description: 
The ScopeMaster REST API exposes the results of automated requirements analysis. Get quality scores for your individual user stories and sets of user stories. You can also use the API to synchronize your user stories with other repositories such as Jira and Azure DevOps.
How is this API different ?: 
ScopeMaster is unique in its ability to expose requirements analysis results.
SSL Support: 
No
Twitter URL: 
https://twitter.com/ScopeMaster_
Developer Support URL: 
https://help.scopemaster.com/collection/39-api-documentation
Interactive Console URL: 
Support Email Address: 
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Architectural Style: 
Version: 
-1
Description File URL (if public): 
Is the API Design/Description Non-Proprietary ?: 
No
Other(not listed): 
0
Other Request Format: 
Other(not listed): 
0
Other Response Format: 
Type of License if Non-Proprietary: 
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer

ScopeMaster

API Endpoint: 
https://api.scopemaster.com/v1/
API Description: 
The ScopeMaster REST API exposes the results of automated software requirements analysis. Developers can get quality scores for individual user stories and sets of user stories. They can also use the API to synchronize user stories with other repositories such as Jira and Azure DevOps. Methods are available for Apps, Requirements, Me (my account), and Versions. ScopeMaster is a tool for intelligent software requirements analysis. (A hedged request sketch follows at the end of this listing.)
How is this API different ?: 
ScopeMaster is unique in its ability to expose requirements analysis results.
SSL Support: 
Yes
Twitter URL: 
https://twitter.com/ScopeMaster_
Developer Support URL: 
https://help.scopemaster.com/collection/39-api-documentation
Support Email Address: 
Authentication Model: 
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Supported Response Formats: 
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Supported Request Formats: 
Architectural Style: 
Version: 
1.0
Is the API Design/Description Non-Proprietary ?: 
No
Other(not listed): 
0
Other(not listed): 
0
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer
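
As noted in the description above, here is a hedged TypeScript sketch of a call to this API. Only the base URL and the method groups (Apps, Requirements, Me, Versions) come from the listing; the /requirements path, the bearer-token auth, and the response handling are assumptions to verify against the documentation at help.scopemaster.com.

    // Hedged sketch of fetching user-story quality scores from ScopeMaster.
    // The path and auth scheme below are assumptions, not documented facts.
    const SCOPEMASTER_BASE = "https://api.scopemaster.com/v1";

    async function getRequirements(apiKey: string): Promise<unknown> {
      // "/requirements" is a guess based on the Requirements method group.
      const res = await fetch(`${SCOPEMASTER_BASE}/requirements`, {
        headers: { Authorization: `Bearer ${apiKey}` }, // auth scheme assumed
      });
      if (!res.ok) throw new Error(`ScopeMaster API error: ${res.status}`);
      return res.json();
    }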

ScrapingBot

API Endpoint: 
API Description: 
Scraping-Bot.io is an efficient tool to scrape data from a URL. It provides APIs adapted to your scraping needs:
- Raw HTML: extracts the code of a page.
- Retail: retrieves the product description, price, currency, shipping fee, EAN, brand, colour, and more.
- Real Estate: scrapes property listings and collects the description, agency details and contact, location, surface, number of bedrooms, purchase or renting price, etc.
Use the Live test on the Dashboard to test without coding. (A hedged request sketch follows at the end of this listing.)
How is this API different ?: 
SSL Support: 
No
Developer Support URL: 
https://www.scraping-bot.io/contact-the-team/
Interactive Console URL: 
Support Email Address: 
Authentication Model: 
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Architectural Style: 
Version: 
-1
Description File URL (if public): 
Is the API Design/Description Non-Proprietary ?: 
No
Other(not listed): 
0
Other Request Format: 
Other(not listed): 
0
Other Response Format: 
Type of License if Non-Proprietary: 
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer
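
As noted in the description above, here is a hedged TypeScript sketch of a Raw HTML request. The listing gives only the shape of the interaction (send a URL, get the page's code back); the endpoint path, request body, and auth scheme shown here are assumptions, so consult the docs at scraping-bot.io for the real contract.

    // Hedged sketch of a Raw HTML scrape: the endpoint path, body shape, and
    // auth scheme are all assumptions to check against the official docs.
    async function scrapeRawHtml(apiKey: string, targetUrl: string): Promise<string> {
      const res = await fetch("https://api.scraping-bot.io/scrape/raw-html", { // path assumed
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Basic ${btoa(`user:${apiKey}`)}`, // username and scheme assumed
        },
        body: JSON.stringify({ url: targetUrl }),
      });
      if (!res.ok) throw new Error(`ScrapingBot error: ${res.status}`);
      return res.text();
    }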