Channel: ProgrammableWeb - Developers

Post Corona, Code for America Looks to Join Devs, Civic Techies and Gov Officials To Transform GovOps

Super Short Hed: 
Code for America Summit To Join Devs, Civic Techies and Gov Officials Over Common GovOps Goal
Thumbnail Graphic: 
Code for America Summit To Join Devs, Civic Techies and Gov Officials Over Common GovOps Goal
Includes Video Embed: 
Yes
Includes Audio Embed: 
Yes
Summary: 
While private sector companies such as Amazon, Facebook, and Google draw ever closer to perfected digital operations, the public sector (government agencies and organizations) is woefully behind. By bringing together developers, civic technologists, and government officials, Code for America is hoping to change that.

Editor's Note: The interview in this article took place prior to the new standard operating procedure of canceling large gatherings due to the current coronavirus pandemic. While the Code for America Summit has been canceled (event registrants will be refunded their money), Code for America is currently exploring its virtual event options, according to an announcement on its website. Whatever the event's eventual status, this article and the accompanying interview offer important insight into the organization's goals and objectives, as well as the importance of the Summit in whatever form it eventually takes.

Code for America founder Jennifer Pahlka's passion for civic duty is infectious when she talks about ordinary citizens stepping up to bring local, state, and federal governments into modern digital times. "I think government really wants to do well," said Pahlka in my interview with her (available below in video, audio, and full-text transcript forms). "But we built government in a pre-digital age and it's a little bit harder to move this very large risk-averse institution into the kinds of ways that companies that are thriving in the digital age tend to work." It's a bit ironic given how far ahead the Federal Government was back in 1975 when it took control of the ARPANET, the 57-node distributed computer network from which the Internet was born. But that's exactly what's on the agenda of the forthcoming Code for America Summit, as more than a thousand developers, civic technologists, and government officials gather virtually for the annual affair to accelerate the pace of digital improvement at all levels of government.

Given my own involvement in organizing the Washington, DC-area API meetup, which is often attended by IT people from all across the Federal Government, Pahlka's characterization could not be more spot-on. Between the government's age, resistance to change, shifting priorities after every election, and massive technical debt, it isn't difficult to see how businesses are digitally racing ahead of government at all levels, resulting in a thorny if not dangerous technological imbalance between the public and private sectors. Similar to the consumerization of IT, as companies like Facebook and Spotify train citizen expectations for engaging with any organization, there are parts of the government (again, at all levels) that are woefully behind. Put another way, why should working with your local Department of Motor Vehicles be any less modern than engaging with a playlist on Spotify?

As you may have guessed by now, the event's virtual nature is due to concern over the unfolding coronavirus crisis which, in many ways, exemplifies the gap. The disease is literally moving faster than the local, state, and federal governments' abilities to not just keep up with it, but to keep citizens informed from a single source of truth. If the governments were behaving more like a modern-day digital enterprise, there might already be a mobile app that puts all the necessary information and precautions at the fingertips of citizens. But that sort of agility has been elusive when it comes to government.

But in America, there's a saying (grafted from the Gettysburg Address); it's a "government of the people, by the people, for the people." In taking that principle to heart, Code for America has, over the last ten years, marshalled the forces of civic-minded technologists who'd rather not just wait for the government to figure things out on its own. In other words, as citizens, we're not going to elect our way into better GovOps. To some extent, citizens have to take matters into their own hands.

"We're just trying to give them some of the principles and practices of the digital age and apply it to government, because, in government, that's where we're really serving everyone," Pahlka said. "These services are enormous. They matter hugely to the American people and we've got to give government that competency and that capability of being great at digital. And that just requires some retraining and some different perspectives. It's really about being able to adopt the way that the internet works in government, in the service of the American people."

To hear more about Code for America and the Summit, watch the full interview below.

Code For America

Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Audio-Only Version

Editor's note: ProgrammableWeb has started a podcast called ProgrammableWeb's Developers Rock Podcast. To subscribe to the podcast with an iPhone, go to ProgrammableWeb's iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.



Full-Text Transcript of Interview with Code for America Founder Jennifer Pahlka

David Berlind: Hi, I'm David Berlind. Today is Tuesday, February 18th, 2020 and this is ProgrammableWeb's Developers Rock podcast. With me today is Jennifer Pahlka. She is the founder of CodeforAmerica.org. They've got a big event coming up. Jennifer, thanks for joining us. Tell us all about this event that you've got coming up, what in March, I think it is?

Jennifer Pahlka: Yeah, it's in March. It's in DC. Hi David. Thanks so much for having me on.

David: It's great to have you. Yeah, it's been so long. We used to work together a long time ago, so it's really great to see you again.

Jennifer: Yeah, I'm delighted, and it's wonderful to be able to talk about the Code for America Summit, which is happening in the middle of March in DC. It's basically where everybody who wants to make government work better in a digital age comes together and talks about how hard it is, how much progress we're making, and really, what the world could look like if government got really good at digital services.

David: Well, why does the government need something like CodeforAmerica.org floating around in their midst?

Jennifer: Well, I think government really wants to do well, but we built government in a pre-digital age and it's a little bit harder to move this very large risk-averse institution into the kinds of ways that companies that are thriving in the digital age tend to work. So there's speed issues. There's issues of really working with users; the way that our best digital services are built is by really tapping into what users want, and that can be really hard in government. So, we're just trying to give them some of the principles and practices of the digital age and apply it to government, because, in government, that's where we're really serving everyone. These services are enormous. They matter hugely to the American people and we've got to give government that competency and that capability of being great at digital. And that just requires some retraining and some different perspectives. It's really about being able to adopt the way that the internet works in government, in the service of the American people.

David: And so, is CodeforAmerica.org sort of an officially sanctioned body by the government? Or did you just sort of sprout up and start helping because you felt it was sort of your civic duty to do this?

Jennifer: Yeah, that's right. We thought it was our civic duty to do this. So I started Code for America 10 years ago. It's a 501(c)(3), so yes, we're sanctioned by the government as a nonprofit and we work really closely with government. So what we say is, look, we can change this. It's very hard to change government, but we can do it if we do three things. We show what's possible by making government services so good that they inspire change. So if you try, for instance, using the application for food stamps, the Supplemental Nutrition Assistance Program [SNAP], it's very cumbersome. It's not built well for users. It doesn't work on a mobile phone. And we've shown that it can be dramatically better with a service called GetCalFresh, which serves all of California now for anybody applying for SNAP. That just sort of ups the bar and gets people thinking about what could be possible.

But then, we help government people do this better themselves. That's what the Summit is all about: we can't do all the services for government, but if we share how this works and how you apply these principles and practices in government, then people in government can make their own services really great.

And then the third thing we do is we build a movement. There's now this amazing movement of people who understand the digital world and either are coming into government or are already in government, who are getting together and saying, "Let's write a new playbook. Let's hold everybody accountable to a much better level of service, a much better experience, and better outcomes." And that's really the movement that then drives even more good examples and even more ability to get people on board with the practices.

David: Wonderful. So I was looking at the website for the summit and I saw that you're expecting something like about a thousand people based on what you just said, are the majority of those people in government now and coming to get inspired or is it a blend of people from government and then others like you who felt it's their civic duty to help the government out and sort of a collaboration? What's the attendee mix like?

Jennifer: Yeah, it's a great mix. A lot of the people are in government; in fact, somewhere near the majority of people are in government, and they're either already in one of the groups like the United States Digital Service or the Colorado Digital Service (San Francisco has its own digital service as well), which are groups in government that sort of self-define as running by the Code for America playbook. They're doing things in a user-centered and data-driven way. Sometimes they struggle with that because government has a bunch of constraints that make it hard to do that. But they are figuring out ways to do it and they're successfully doing it in many services in government, though we have a long way to go.

The rest of it is a mix of folks. Some of our vendors are there. We've got what we call civic technologists, who work like we do, improving government from that sort of outside perspective. We've got people who are just learning about it. I mean, even staffers from congressional offices who are in charge of the oversight of government digital are coming to learn and say, "Wait a minute, there's a better way to do this and we're going to have to be part of that solution."

David: And so this is a mixture. You're sort of talking about cities and states. So does Code for America work across all levels of government, from municipal all the way up to federal?

Jennifer: Our biggest projects right now are with state governments. States do a lot in the areas that we specialize in, which is the social safety net and the criminal justice system. We also work a lot with counties, and then we have 82 cities around the country that have a Code for America brigade, which is essentially a chapter, a volunteer chapter. So we specialize in working at the city, county, and state level and our big projects tend to be with them. However, the principles and practices that we articulate and evangelize to the rest of government are applicable at the federal level as well. And so the attendees at the Code for America Summit are federal, state, and local. In fact, the last couple of events it's been about a third, a third, a third split between those three groups. If you're working with states, you're also working with the federal government in the sense that the federal government regulates most of these programs, like SNAP and Medicaid, and so we do work with them as well, but they aren't a client, if that makes sense.

David: I see. Now, the name of the organization is Code for America, and this is the Developers Rock podcast. When I hear the word "code," I think developer, because you're coding for somebody. So what's in it for developers? Do developers come to this event, and what do they do when they get there?

Jennifer: Yeah, so if you look at the speakers who are there speaking, a good chunk of them are developers. They have developed wonderful digital services at the Veterans' Administration, with state governments, etc. And what they get out of this more broadly is that they get to work on the things that matter most. We have had so many developers come from the private sector or the social sector and come into government and say, now that I'm coding a better veterans' healthcare application, I can never go back, because I'm so aware now of the people that I'm helping. I'm helping people who need help the most, and my impact is bigger than it has ever been on parts of our country that absolutely need the most help. And so, what developers really get out of coming to the Summit is a community and the skills and the tactics they need to be successful making those digital services in government. But what they're getting more broadly is this incredible meaning and satisfaction in their jobs.

David: Sure. Very meaningful work, and they get to rub shoulders with birds of a feather, other people who feel the same way. So it sort of escalates that feeling of altruism and contribution to the betterment of the nation. One question I have for you, though, and I don't know if you've given this any thought: a lot of us look at what's going on at the federal level of our government and we see a lot of nothing getting done. It almost seems like the government doesn't have an interest in getting things done. You have people yelling across the aisle at each other, complaining that we're not getting any legislation done, and blaming each other; the government is sort of stalled in this way by the people who lead it. Is there any hope of this movement coming from underneath and getting things going, sort of greasing the wheels of government, so to say, as you pointed out?

Jennifer: I think beneath the surface of that dysfunction that we all experience, there is real progress. Six years ago, if you were a veteran, you could literally... almost nobody could get through the healthcare application. It required this very specific combination of an outdated browser and an outdated version of Adobe PDF. And if you didn't have that exact combination, which just happened to be the sort of weird outdated combination that VA staffers' computers had, you couldn't load the application form.

That's just one of a dozen things that were wrong with that one specific service. Now you can, and people do, and they use a service that looks so good that they go, wait, did government build this? Because it's simple and it's easy and it's clear. That's the work of the United States Digital Service, and things like that are happening all over government. It's not making the headlines but it really is progress.

I mean, the same thing with our SNAP application that we're doing in different states now. For example, we have these laws passing in states around the country decriminalizing various convictions, often related to the decriminalization of marijuana, but not always. And we've been working with states to figure out how to clear those convictions through a process that doesn't involve 10 months of paperwork but just says, wait a minute, this person has a conviction, it's in a database. Let's just change the record in the database. And that's the kind of progress. It's not just that we're making forms better, it's that we're helping people in government understand how the digital world works and be far more efficient. Like leap-frogging the process and just saying, this is just a matter of changing a record in a database. Let's help you do it.

Hundreds of thousands of people have already gotten relief because we've been able to implement these laws a hundred times faster than they would've been implemented, and now millions more across the country are going to. That's hope, right? There is functioning government happening, it's getting better, we're getting better at doing government right. And that's happening at the same time as all of that political dysfunction. I feel very lucky that I get to look at that every day and use it as a real counterpoint to some of the less beautiful parts of government.

David: Right, so while some of the people who are leading the show don't seem to be able to get anything done, you're busy at work, under the hood, getting a lot of stuff done. And I would have to support the fact that there are people who are looking to do that kind of meaningful work. I don't know if you know this, but I'm the co-leader of the Federal API meetup in Washington DC the first Tuesday of every month. I am Gray Brooks's wingman. I don't know if you remember Gray Brooks from your days in DC.

Jennifer: He's the best.

David: He is the best. He's an amazing human being. And so, I'm down there every month helping out and helping that meetup run. And of course, not only can everybody who's watching this come to that meetup if you're in the DC area, but I also get to meet a lot of people from the various corners of government, especially the federal government, and they're all very interested in moving the ball forward. So let's go back to the Summit real quick. March 11th to March 13th. Can anybody come? Is there a fee? How do you sign up?

Jennifer: Register at codeforamerica.org/summit. It's a cheaper rate if you're in government; those budgets are a bit limited. Private sector folks pay a bit more, but it's absolutely affordable and the content is amazing.

David: And you might get to meet Jennifer Pahlka who is the founder of CodeforAmerica.org there. Right.

Jennifer: I will absolutely be there. I'm giving a talk and I'll spend the best three days of the year with the people that I admire the most.

David: Wonderful. Well, there you go. That's the Code for America summit. It's going to have workshops, receptions, breakout sessions, lightning talks, and keynotes. Of course, one of those will be by Jennifer here. Jennifer Pahlka. Thank you very much for joining us.

Jennifer: Thank you so much David. This has been fun and it's great to see you.

David: It's been terrific to talk to you. We've been speaking with Jennifer Pahlka. She is the founder of CodeforAmerica.org. They're running their big annual event in Washington DC, March 11th to March 13th, 2020. Hope to see you there. For now, I'm signing off. David Berlind, editor in chief of ProgrammableWeb. If you want more videos like this one, just go to our YouTube channel at www.youtube.com/programmableweb, or you can go to ProgrammableWeb.com, where you'll find an article with this video embedded, the full-text transcription of what Jennifer just said, as well as the audio-only version if you prefer it in podcast form.

Until the next video, thanks for joining us.

Content type group: 
Articles
Top Match Search Text: 
Code for America Summit To Join Devs, Civic Techies and Gov Officials Over Common GovOps Goal

OpenWater Opens Application and Review Management Software to Developers

Super Short Hed: 
OpenWater Opens Application and Review Management Software to Developers
Featured Graphic: 
OpenWater Opens Application and Review Management Software to Developers
Related APIs: 
OpenWater
Featured: 
Yes
Summary: 
OpenWater is opening up its software platform to third-party developers. Through the OpenWater SDK and API, third parties can directly integrate application and review management into their list of services. Prior to the API and SDK, users had to outsource the process to OpenWater.
OpenWater, an application and review software platform, is opening up its platform to third-party developers. Through the OpenWater API and SDK, third parties can directly integrate application and review management into their host of services. OpenWater's platform is already responsible for over 25 million application and review submissions a year, and now that power is delivered to third-party developers.

"As a developer myself, I cut my teeth building on top of WordPress," Kunal Johar, OpenWater CTO, commented in a press release. "It just made sense to open up our platform to agencies who can understand the specific nuances of their customers better than we can. We want to best prepare our customers and set them up for as much success as possible."

Looking through reviews, applications, and other online submissions is a time-consuming and tedious process. That's why some of the biggest companies in the world have outsourced a large portion of this process to OpenWater. OpenWater's new developer-first approach to this process will allow third parties to leverage the technology that OpenWater has used to scale this process.

OpenWater's REST API is documented via Swagger. The SDK is distributed via NuGet for .NET environments. Much of the programmatic functionality centers on automating the flow of submitted data. For complete details and examples, check out the developer docs.
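To make the idea of "automating the flow of submitted data" concrete, here is a minimal sketch of the pattern an integrator might follow: pull pending submissions from a REST endpoint and route each one by category. The endpoint path, field names, and handlers below are hypothetical illustrations, not OpenWater's actual API surface; consult the Swagger-generated docs for the real resource names.

```python
import json

# Hypothetical endpoint path and payload shape -- not OpenWater's real API.
SUBMISSIONS_PATH = "/v2/submissions?status=pending"

def route_submissions(fetch, handlers):
    """Pull pending submissions and dispatch each to a handler by category.

    `fetch` is any callable that takes a path and returns a JSON string,
    so the same logic works against a live HTTP client or a test stub.
    """
    submissions = json.loads(fetch(SUBMISSIONS_PATH))
    routed = {}
    for sub in submissions:
        handler = handlers.get(sub["category"], handlers["default"])
        routed[sub["id"]] = handler(sub)
    return routed

# Stub standing in for a real HTTP client during a dry run.
def stub_fetch(path):
    return json.dumps([
        {"id": 101, "category": "abstract", "title": "API Design at Scale"},
        {"id": 102, "category": "award", "title": "Best Integration"},
    ])

handlers = {
    "abstract": lambda s: f"queued for peer review: {s['title']}",
    "default": lambda s: f"queued for staff triage: {s['title']}",
}

print(route_submissions(stub_fetch, handlers))
```

Injecting the transport function keeps the routing logic testable without network access, which is also a convenient way to prototype against sample payloads before wiring in real credentials.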

Content type group: 
Articles
Top Match Search Text: 
OpenWater Opens Application and Review Management Software to Developers

Salesforce Commerce Cloud Exposes APIs To Public With New Try-Before-You-Buy Developer Portal

Thumbnail Graphic: 
Interview with Andrew Lawrence, director of product management at Commerce Cloud at Salesforce
Includes Video Embed: 
Yes
Includes Audio Embed: 
Yes
Related APIs: 
Salesforce Commerce Cloud Assignments
Salesforce Commerce Cloud Campaigns
Salesforce Commerce Cloud Catalogs
Salesforce Commerce Cloud Coupons
Salesforce Commerce Cloud Customers
Salesforce Commerce Cloud Einstein Recommendations
Salesforce Commerce Cloud Gift Certificates
Salesforce Commerce Cloud Orders
Salesforce Commerce Cloud Products
Salesforce Commerce Cloud Promotions
Salesforce Commerce Cloud Shopper Baskets
Salesforce Commerce Cloud Shopper Customers
Salesforce Commerce Cloud Shopper Gift Certificates
Salesforce Commerce Cloud Shopper Orders
Salesforce Commerce Cloud Shopper Products
Salesforce Commerce Cloud Shopper Promotions
Salesforce Commerce Cloud Shopper Search
Salesforce Commerce Cloud Shopper Stores
Salesforce Commerce Cloud Source Code Groups
Salesforce Commerce Cloud CDN Zones
Summary: 
In this presentation of ProgrammableWeb's Developers Rock Podcast, David Berlind, editor in chief of ProgrammableWeb interviews Andrew Lawrence, director of product management at Commerce Cloud at Salesforce. Salesforce Commerce Cloud has launched a newly redesigned API developer portal.

Salesforce Commerce Cloud has launched a newly redesigned API developer portal that, for the first time, publicly exposes the type of API reference documentation and tooling that was previously only available in closed-door fashion for subscribers to the service.

Commerce Cloud is a cloud-based service from Salesforce that includes all the functionality a merchant needs — from inventory management to shopping cart capability to electronic payments and more — in order to open and maintain a virtual storefront. 

"With Commerce Cloud we've had APIs for quite a while. They haven't really been very well-known and certainly not publicly accessible. They've often been behind our login authentication walls. The documentation is out there, but to get to [it] has been difficult," said Commerce Cloud Director of Product Management Andrew Lawrence. "So we built a new API presentation layer that describes the APIs and puts out there what's available, and then it allows people to actually use a mocking service against the APIs and you can see what responses would come back and what things would look like for the APIs."

ProgrammableWeb’s interview of Lawrence is available below as video, audio-only, and in full-text transcript form. Lawrence can be seen in the video providing a demonstration of the new developer portal.

Mocking services are well-known to developers as non-production samples or "mocked" versions of an API. Whereas a production API for a given service exposes production data at one endpoint, a mocking service typically exposes sample data for anyone to consume at a different, less secure endpoint. This approach, which is considered a best practice for engaging new and existing developers, makes it possible for "tire-kicker" developers to go as far as they want in trying an API before they buy it (or internally influence the decision to buy it within their organizations).
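The mock-versus-production pattern can be sketched in a few lines: the same client code is pointed at either base URL, and only the transport differs. The hosts, paths, and response shape below are placeholders for the pattern itself, not Commerce Cloud's actual endpoints or schema.

```python
import json

# Illustrative only: real hosts and paths live in the provider's dev portal.
PRODUCTION_BASE = "https://api.example-commerce.com"
MOCK_BASE = "https://mock.example-commerce.com"

class ProductsClient:
    """Identical client code works against the mock or production endpoint."""

    def __init__(self, base_url, transport):
        self.base_url = base_url
        self.transport = transport  # callable: url -> JSON string

    def get_product(self, product_id):
        body = self.transport(f"{self.base_url}/products/{product_id}")
        return json.loads(body)

# A mock transport returns canned sample data, so "tire-kicker" developers
# can exercise the client without credentials or production data.
def mock_transport(url):
    return json.dumps({"id": url.rsplit("/", 1)[-1],
                       "name": "Sample Sneaker", "price": 59.99})

client = ProductsClient(MOCK_BASE, mock_transport)
product = client.get_product("sku-123")
print(product["name"])
```

Swapping `MOCK_BASE` and `mock_transport` for the production base URL and a real HTTP call is the only change needed to graduate from tire-kicking to a live integration.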


The Salesforce Commerce Cloud dev portal includes 20 APIs that span the gamut of Commerce Cloud's back-end functionality

With the completely redesigned API developer portal, non-subscribing developers can engage the mocking service through an on-site interactive console, or they can engage the sample endpoint with their own tools, including a command line interface (using a tool like cURL), their own source code, or a downloadable Node.js-based software development kit (SDK). Node.js is one of the platforms supported by Heroku, a Salesforce-operated platform-as-a-service. In other words, developers wanting to experiment with the API using Node.js (server-side JavaScript) can do so without having to stand up their own Node.js server. They can just use Heroku instead.

The new developer portal, which was built using MuleSoft’s Anypoint Community Manager, includes the reference documentation and resources for 20 Commerce Cloud APIs, some of which contextually draw upon the capabilities from Salesforce’s other clouds and technologies. For example, one of the 20 APIs is called Einstein Recommendations and it relies on the artificial intelligence capabilities of Salesforce Einstein to recommend purchases to customers based on their previous activities such as the products they viewed or added to their shopping carts.

Given how the twenty APIs offer programmatic access to all of the underlying capabilities of Commerce Cloud, they enable developers to leverage the service as a headless e-commerce platform. In other words, instead of using the standard user interface elements that come with Commerce Cloud, retailers and other merchants can build their own user experiences using their own developer tools and platforms. The same APIs can also be used to integrate Commerce Cloud with other non-Salesforce applications and platforms.
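A tiny sketch of what "headless" means in practice: the merchant fetches a product record from the API and renders it with their own presentation code. The product dictionary below mimics a generic commerce API response; its field names are illustrative, not Commerce Cloud's actual schema.

```python
def render_product_card(product):
    """Turn an API product record into the merchant's own HTML fragment.

    In a headless setup, this presentation layer is fully owned by the
    merchant; the commerce platform only supplies the data via API.
    """
    return (
        "<article class='card'>"
        f"<h2>{product['name']}</h2>"
        f"<p>${product['price']:.2f}</p>"
        f"<button data-sku='{product['sku']}'>Add to basket</button>"
        "</article>"
    )

# Stand-in for a record returned by a hypothetical products endpoint.
api_response = {"sku": "sku-123", "name": "Sample Sneaker", "price": 59.5}
html = render_product_card(api_response)
print(html)
```

The same data could just as easily feed a native mobile view or a voice interface; decoupling data from presentation is the whole point of the headless approach.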

Developers are free to anonymously browse and interact with most elements of the portal. In fact, in the demo that Lawrence gives during the interview, he works with the portal anonymously. The one area of the portal that requires users to log in is the forums area, where developers can exchange messages with each other as well as with members of Commerce Cloud's developer relations and support groups.

As for what's coming next, Lawrence mentioned that the portal currently focuses on B-to-C use cases and that he and his team will be adding more B-to-B related content in the near future. They are also looking into what SDKs, beyond Node.js, to offer next.

The different versions of the interview (video, audio, and text) are embedded below.

Disclosure: Although Salesforce is the parent company to MuleSoft which itself is the parent to ProgrammableWeb, this coverage was independently derived. At no time was ProgrammableWeb asked or required to cover this news. The editorial decision to cover Commerce Cloud as an API provider was no different from our decisions to cover news from other API providers.

Interview with Salesforce Commerce Cloud Director of Product Management Andrew Lawrence

Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Audio-Only Version

Editor's note: ProgrammableWeb has started a podcast called ProgrammableWeb's Developers Rock Podcast. To subscribe to the podcast with an iPhone, go to ProgrammableWeb's iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.



Full-Text Transcript of Interview with Salesforce Commerce Cloud Director of Product Management Andrew Lawrence

David Berlind: Today is Monday, March 16th, 2020. I'm David Berlind, editor in chief of ProgrammableWeb, and this is ProgrammableWeb's Developers Rock podcast. With me today is Andrew Lawrence, director of product management at Commerce Cloud at Salesforce. Andrew, thanks for joining me on the show today.

Andrew Lawrence: Thanks. It's great to be here.

David: Oh, it's great to have you. Let's start off with a real simple question. What is Commerce Cloud?

Andrew: Sure. Commerce Cloud is the commerce arm of Salesforce; delivering websites that sell to commerce customers is its primary purpose. So many of your favorite retailer websites that are out there running today are actually running on Commerce Cloud from Salesforce.

David: Okay. So this is a sort of website where you might load up all of your inventory, everything you have to sell, put the prices on it, and then when people want to buy something, they literally pull the trigger through your technology, and that's how it ends up showing up at their house or their place of business.

Andrew: Yep, that's exactly right. We build the basket, we put everything together, we process the payment and send it out.

David: Terrific. Well, okay, now you're here on ProgrammableWeb, on the Developers Rock podcast. We love developers and the only reason we have people on the Developers Rock podcast is if there's something to do with APIs. So what does Commerce Cloud have to do with APIs?

Andrew: Yeah, that's a great reason why I want to be here today. So Commerce Cloud has just released our new developer portal, which contains a new set of APIs that we are providing for customers, and for everyone to actually get in and look at the APIs. So these are new APIs that would help you in building a commerce website to get it up and running. So things related to getting customer information, product information, building baskets and orders, those types of APIs.

David: Now I've heard this referred to many times as headless commerce. Is that what you guys call it?

Andrew: Yeah, I certainly use the headless commerce term. So headless commerce is kind of a way of describing how many people are starting to look at building websites for their commerce interactions, where they build and want to control the head of the website, meaning the way that it looks and presents itself to customers. They build that themselves, and then they want to connect directly to APIs to handle everything that they need on the other side. Whereas previously, commerce instances were pretty monolithic and built all together, and now they want to separate that presentation layer from the actual work layer, or the API layer.

David Berlind: So it gives them a real opportunity to customize things exactly the way they want. They can use frameworks, languages, and the sorts of front-end development tools that they're really familiar with, just connect through the APIs, and then they get all that backend functionality that you were just talking about: the basket and the pricing and the inventory on the other side of the API.

Andrew: Correct. Yep, that's exactly right.

David: That's very cool. Now, how long have you been providing the APIs? Is it relatively new, or did you have APIs before this new announcement?

Andrew: Yeah, so with Commerce Cloud we've had APIs for quite a while. They haven't really been very well known and certainly not publicly accessible. They've often been behind our login authentication walls. The documentation is out there, but to get to [it] has been difficult. So we needed a new kind of API presentation layer, and that's what we've built up and put in there. So we built a new API presentation layer that describes the APIs and puts out there what's available, and then it allows people to actually use a mocking service against the APIs and you can see what responses would come back and what things would look like for the APIs.

David: Wow. There's a lot to unpack there. I want to start with: so before, the APIs were sort of behind a firewall. The only people who could actually access them were developers that worked for what? Licensees of Commerce Cloud?

Andrew: Correct, correct.

David: Okay. And so now they're available to any developer? You don't have to be a licensee or a subscriber to Commerce Cloud?

Andrew: Yeah. So any developer can get in now and see the APIs, and we have a mocking service that's available. So to actually use them directly against a Commerce Cloud instance, you obviously have to be a customer, but anyone can now use these APIs, use the mocking services, download an SDK that we provide for them, and use the SDK to see how you would interact with the APIs.

David: Right. And so the mocking service is sort of like example data, so they can play with the API, experiment with it, decide whether or not this is an API that's robust enough for them to get their jobs done. It's a great way to kind of sample the service, try before you buy, before you become a subscriber, when you're actually accessing the real live data. Is that right?

Andrew: Right. That's exactly right. We want people to be able to get a hold of that, kind of understand what they're looking at before they need to actually be a customer and get a full instance.

David: Why the change of heart? Before it was behind the firewall, only subscribers could get access to it. Now suddenly you're opening it up to all the developers that are out there. That seems like a pretty exciting change.

Andrew: Yeah, I mean we really want to start to get more people involved with the development on top of Commerce Cloud. We can't do everything. We are looking to build a platform that's available, and we really look to others, not only our customers, but our partners to build things on top of the platform to make it usable by many different market segments, different types of retailers, different types of customers, different types of other companies and people that may do some commerce things. But maybe they're not retailers, they may do other things. So looking to expand that, we needed to have something that was more accessible.

David: Sure. So this is one new avenue for business, right? Before developers weren't really a great way to bring new business into Commerce Cloud, but now we've got developers who have a chance to experiment with it. They might convince the other people in their organization, "Hey, this is the one we want to work with," and suddenly Commerce Cloud has a new customer.

Andrew: Yep, exactly.

David: Yeah. That's a best practice that we see getting used more and more across the API economy. I'll tell you what, can we take a look at what this looks like?

Andrew: We can, yeah.

David: That'd be awesome.

Andrew: All right, let's jump out and look here. All right, so now jumping in, we can look at what the site is right now. So you can go to developer.commercecloud.com and see the full site that's available there. There's a lot of information on here about various different developer topics related to Commerce Cloud, some with our new APIs, some with our current storefronts that are available out there and how to build things on top of Commerce Cloud. But I think the real reason that we built this site is for what we have in this getting started area that's in here. So this kind of brings you through and shows you how to get started using these new APIs in a headless manner. It gives you details about how to deploy the sample app; we provide the sample app and the SDK, which are available for Node.js. That's the primary language that we're looking at right now. You can download and look at these; they're open source and available to everybody, and you can find all the details right here on the site in terms of configuring it and getting it set up.

And then the next piece that comes in is to be able to look at the full API reference that's available. So we have 20 APIs that are currently out there and available to use. Each of these APIs is listed in here with a description of what it does. And on any of them, you can jump into the API and look at the full details of what the API can do. Each API has a developer guide, kind of an overview, that talks about how you authenticate and use the API, and use cases for it.

In addition, there are things related to the API specification itself. So from within here, you can see all the endpoints that are available for the API. We can see what they look like, we can dive in and see full details of what's there. We can see code examples for calling the API as well as security information, and then, most importantly, over on the side we can see details for using the mocking service. So you can enter the parameters that are needed, submit the requests, and see what the responses will look like coming back from the service. So we can see the full details here for the campaigns API, but each of the APIs has these available, and everyone can look at the APIs, see what the responses are, and then use the SDK to interact with them as needed.

There's also a full developer forum community out there where people can go and post questions; we're monitoring those and can give responses to the community. And then there's a support section that also has some other detailed things, like frequently asked questions and information about the site in general. That's really the beginning of this site. The site just launched in February, and we're continuing to update it, adding more and more information for developers and giving people more information on how to really get started using Commerce Cloud.

David: This looks super clean. How did you guys build this?

Andrew: So, thankfully, since we're part of Salesforce, we were able to leverage Salesforce's Community Cloud product, but also a new product from MuleSoft called API Community Manager. And that's really what's generating a lot of the API information you see here. So from our usage of MuleSoft, we were able to easily step into making those APIs available publicly so people can see and browse through them.

David: So MuleSoft's API Community Manager provides essentially what is a developer portal. And were you able to customize it, or is there a template that you used? How did that work?

Andrew: Yeah, we did start with an initial template, so everything that's in here started from the initial template, but then it's a fully capable portal that we're able to customize: give it our own look and feel, give it our own authentication mechanisms to get into the portal. And then all the content we were able to create within the portal and link to from here. So yeah, it was very flexible with what we could do. We actually had to limit ourselves a little bit because we had kind of a deadline to get things out there and running. So there were many things where we had to keep saying, "No, we'll get to that later. First we have to get it out the door." But now that it's out the door, we can start building up what we need on top. The sky's the limit.

David: Thank you for the demo. So one question I have is about the business model. You mentioned, of course, that any developer can come in and give it a try. To start things off, can they just come in anonymously and start working with it, or do they have to sign up or register in some way just to play around with the mocking service?

Andrew: No, to play with the mocking service you can use it anonymously. So I wasn't signed in during the demo there. The only reason you need to sign into the site is if you want to post content to the community. So to post questions or comment on a question, yeah, you'll need to log in so that we have that information for your posting. But otherwise, anyone can anonymously use the APIs, and the SDK and the sample app that we have are open source and now available on GitHub, so you can get them from there.

David: And what do you have to log in with? If you're not a subscriber, is there some other way to set yourself up and create a user ID on the service? How does that work?

Andrew: Yeah, you can create a user ID directly through there. It actually uses the Salesforce Trailblazer identity. So if you've done anything else with Salesforce, or if you've used any of their Trailblazer tools or anything like that, you probably already have a Salesforce ID, so you can just use that to log in. But the site will guide you through that process and get you there.

David: And then what does it take to become a subscriber that has access to the APIs? I'm assuming it's a little bit like a Sales Cloud and some of the other Salesforce clouds where there are different tiers of service, some have access to the API, some do not. Is there a similar setup for Commerce Cloud?

Andrew: Yeah, Commerce Cloud is a fully functioning commerce system. So there is a sales process to get in there and get the license to Commerce Cloud. But once you have licenses for the Commerce Cloud, any of the licenses for Commerce Cloud include the API layer. So there isn't really a separate API piece for Commerce Cloud. If you have a license for Commerce Cloud today, you can use the APIs.

David: Wow, that's great. And you mentioned that you're going to continue to iterate on what you've done. You have anything you can talk about in terms of what's coming?

Andrew: Sure. Yeah. There are some things with the site itself that we want to update. We want to get a little bit cleaner on some of the UX and things that are happening on there, and get more details with the APIs, giving some more concrete examples for each of the APIs. But then, as a broader perspective, we're looking to start adding more APIs from other Salesforce entities. So the APIs that you see on the site today deal with the B-to-C commerce side of Commerce Cloud, so that's the business-to-consumer piece. There is also a business-to-business, B-to-B, piece that we'll be looking to add in here as well, as well as just generally bringing a lot of other developer information into the site. The site itself is really focused on these new APIs right now. We have a lot of information related to developing on Commerce Cloud, not using the APIs, but using some of our other tools. We'll start to bring some of that information in here so that we really just have one place where you can go to learn everything about developing on top of Commerce Cloud.

David: You mentioned [Node.js], which a lot of developers love, but some developers work with other languages. Any more SDKs on the way?

Andrew: At this point in time, we haven't actually decided what would be next. So we jumped into Node after doing some research, looking at our customer base, what was out there, and [what] our developers are doing. Node had the largest coverage amongst all of them, so that's why we chose it first. We've looked at some other options, maybe even some specific mobile options, maybe iOS. Then we've looked at [Vue.js], but we haven't really sat down and decided what should be next. We're trying to see how traction and adoption go with the Node.js SDK, and then we'll see where we go.

David: Well great. I can't wait to check back in with you to see how things are at some point later in the year.

Andrew: Great. I'd love it.

David: Yeah. Well, okay, we've been speaking with Andrew Lawrence. He is the director of product management at Commerce Cloud at Salesforce. Andrew, where can developers find this amazing portal?

Andrew: Everybody should go to developer.commercecloud.com. That will get you in there. You can start exploring everything from there.

David: Terrific. Well thank you very much for joining us today.

Andrew: Thank you. This was great.

David: Well, that's a wrap. It's another Developers Rock podcast from ProgrammableWeb. I'm David Berlind. If you want to find more videos like this one, you can come to ProgrammableWeb; we've got them there, along with the full-text transcript of every single interview we've done. You can also listen to the audio-only version if you want to download it from Google Play Music or iTunes, and you can also go to our YouTube channel at www.youtube.com/programmableweb. All our videos are up there as well. Until the next podcast, thank you very much for joining us.

Content type group: 
Articles

Google Ending Support for JSON-RPC and Global HTTP Batch

Super Short Hed: 
Google Ending Support for JSON-RPC and Global HTTP Batch
Featured Graphic: 
Google Ending Support for JSON-RPC and Global HTTP Batch
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Related Companies: 
Featured: 
Yes
Summary: 
Google announced that it will discontinue support for the JSON-RPC protocol and Global HTTP Batch. Google had originally planned to discontinue support for both features in 2019, but has now extended the deadline to August 12, 2020. Visit the blog post announcement for migration assistance.

Google announced that it will discontinue support for the JSON-RPC protocol and Global HTTP Batch. As Google continues to invest in its API infrastructure, it must support the latest API technology, which continues to evolve quickly. Increased performance and enhanced security have led Google beyond these two technologies, and JSON-RPC and Global HTTP Batch are no longer compatible with Google's infrastructure moving forward.

"Our support for these features was based on an architecture using a single shared proxy to receive requests for all APIs," Shilpa Kamalakar, Google Technical Program Manager, commented in a blog post announcement. "As we move towards a more distributed, high performance architecture where requests go directly to the appropriate API server we can no longer support these global endpoints."

Although Google had originally planned to end support for both features last year, it has now extended the deadline. As of August 12 of this year, neither will be supported. Transitioning away from these features takes some development work, and associated downtime. To assist developers, Google has published a downtime schedule.

If you send requests to "https://www.googleapis.com/rpc" or "https://content.googleapis.com/rpc", you currently use JSON-RPC and need to migrate. If you form homogeneous batch requests using Google API Client Libraries, using non-Google API client libraries, or using no client library at all (i.e., making raw HTTP requests), you need to migrate. To see answers to most migration questions, visit the blog post announcement.
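As a rough illustration, a helper like the following (hypothetical, not part of any Google SDK) can flag request URLs that still target the deprecated global endpoints. The per-API batch path shown is an assumption based on Google's migration guidance, not taken from this announcement:

```javascript
// Hypothetical helper: returns true if the URL targets one of the
// deprecated global endpoints (global JSON-RPC or global batch).
function usesDeprecatedGoogleEndpoint(url) {
  return /^https:\/\/(www|content)\.googleapis\.com\/(rpc|batch)$/.test(url);
}

console.log(usesDeprecatedGoogleEndpoint('https://www.googleapis.com/rpc'));            // true
// Per-API batch endpoints (path shown is an assumption) are unaffected:
console.log(usesDeprecatedGoogleEndpoint('https://www.googleapis.com/batch/drive/v3')); // false
```

A sweep like this over a codebase's configured base URLs is one quick way to find call sites that need the migration work before the deadline.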

Content type group: 
Articles
Top Match Search Text: 
Google Ending Support for JSON-RPC and Global HTTP Batch

Visual Composer

API Endpoint: 
https://visualcomposer.com/
API Description: 
<p>The Visual Composer API allows developers to programmatically create custom elements within the Visual Composer WordPress website builder.</p><p>An element is an independent component of the system that represents an HTML based block with the ability to output media and dynamic content. While Visual Composer provides prebuilt elements, this API enables users to create custom elements.</p>
SSL Support: 
Yes
Twitter URL: 
https://twitter.com/VisualComposers
Visual Composer
Authentication Model: 
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Supported Response Formats: 
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Type: 
Supported Request Formats: 
Architectural Style: 
Version: 
1.0
Is the API Design/Description Non-Proprietary ?: 
Yes
Other(not listed): 
0
Other(not listed): 
0
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer

Visual Composer

API Endpoint: 
https://visualcomposer.com/
API Description: 
<p>The Visual Composer API allows developing custom elements to build a website.</p><p>An element is an independent component of the system that represents an HTML based block with the ability to output media and dynamic content.</p><p>Download Visual Composer, a website builder for developers, at <a href="https://visualcomposer.com/features/developers/">https://visualcomposer.com/features/developers/</a></p>
How is this API different ?: 
SSL Support: 
Yes
Twitter URL: 
https://twitter.com/VisualComposers
Interactive Console URL: 
Visual Composer
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Type: 
Architectural Style: 
Version: 
-1
Description File URL (if public): 
Is the API Design/Description Non-Proprietary ?: 
Yes
Other(not listed): 
0
Other Request Format: 
Other(not listed): 
0
Other Response Format: 
Type of License if Non-Proprietary: 
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer

Codugh Pays Developers for Every API Call

Super Short Hed: 
Codugh Pays Developers for Every API Call
Featured Graphic: 
Codugh Pays Developers for Every API Call
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Related Companies: 
Summary: 
Codugh wants developers to directly earn money for their APIs. It's building a marketplace where developers publish APIs. As those APIs are used, regardless of integrated application, the API developer gets paid. Codugh has partnered with Bitcoin SV (BSV) to pay developers per call.

Codugh wants developers to directly earn money for the APIs they create. The company is building a marketplace where developers publish APIs. As those APIs are used, regardless of integrated application, the API developer gets paid. It's a pay per call model, and Codugh has partnered with Bitcoin SV (BSV) to make it happen.

BSV facilitates unlimited scaling and microtransactions, which makes it an ideal platform to roll out Codugh's marketplace. Developers can set their own rates, but it will likely be a few cents, or less, per call. Developers who have success on the platform will rely on thousands, if not millions, of microtransactions to build their compensation. In addition to getting paid in Bitcoin, developers can leverage other platform features including performance badges, ratings, and user feedback.

Getting started with Codugh is three simple steps for developers. First, developers need to create and deploy the API. Second, developers upload the API endpoints into the Codugh marketplace. Third, each time the API is called, the developer is paid in real-time.

Codugh has not yet launched. For early access, sign up at the Codugh site. At the site, Codugh also addresses intellectual property rights, authentication, and other frequently asked questions.

Content type group: 
Articles
Top Match Search Text: 
Codugh Pays Developers for Every API Call

Node.js v14 Has Arrived With Some New API Features

Super Short Hed: 
Node.js v14 Has Arrived With Some New API Features
Featured Graphic: 
Node.js v14 Has Arrived With Some New API Features
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Related Platform / Languages: 
Featured: 
Yes
Summary: 
Version 14 of Node.js was released on April 14, and it brings several new features, some experimental, which can be a benefit to API providers and consumers. Node.js relies on an internal JavaScript engine called V8 that recently released a new version that is now a part of Node.js 14.

Version 14 of Node.js (the server-side JavaScript platform) was released on April 14, and it brings several new features, some experimental, that can be a benefit to API providers and consumers. Node.js relies on an internal JavaScript engine called V8 that's built by Google. V8 recently released a new version, 8.1, that also includes new features. This newest version is part of Node.js 14, and as such, those new features are available in Node.js as well. Let's explore some of these new features.

Optional Chaining and Nullish Coalescing

V8 version 8.1, and thus Node.js 14, includes a new JavaScript language feature called Optional Chaining. Although seemingly trivial, this feature is going to prove a huge benefit to API consumers. To understand what it is, let's first look at a situation where you would need it. When you call a web API, you typically use an SDK built for the API (usually offered by the API provider), or you'll use an HTTP library such as request.js, found at https://github.com/request/request. Some APIs provide a flexible response structure, meaning objects may or may not have the members you need.

For example, you might call an API that returns customer information like this:

{ 
    first: 'John', 
    last: 'Smith', 
    phone: '919-555-1212', 
    office: {
        primary: {
            address: '123 Main St',
            city: 'New York',
            state: 'NY',
            zip: '10010'
        },
        secondary: {
            address: '111 First Ave',
            city: 'Chicago',
            state: 'IL',
            zip: '60007'
        }
    }
}

This object contains an office object that contains a primary member and a secondary member. If you need to access the zip of the secondary member, you can use code such as this:

console.log(obj.office.secondary.zip);

This only works, however, if all the members you're accessing actually exist. Suppose the API only returns a secondary object when the customer has a secondary office. If the entire secondary object is gone, trying to access its zip member will result in an exception, because the secondary object isn't even present:

TypeError: Cannot read property 'zip' of undefined

If you don't have an exception handler, your entire Node.js program will stop. Previously, the way around this was to pack your code with if statements. There are different approaches, but this is a simple one:

if (obj && obj.office && obj.office.secondary && obj.office.secondary.zip) {
    console.log(obj.office.secondary.zip);
}
else {
    // Handle the situation that the secondary office isn't present
}

With the new version, however, the V8 engine allows you to use optional chaining. Here's how you use it:

console.log(obj.office.secondary?.zip);

No "if" statement is needed. Notice the question mark after the word secondary. This tells Node that you're not sure if the office object will contain an object called secondary, and if not, to just halt the processing of the expression and return undefined. So if the secondary object is present, the console.log will print out the zip like so:

60007

And if the object isn't present, it will print out undefined:

undefined

You'll likely still need an "if" statement, but your code will be much simpler. For example, you might need to put the question mark in multiple places in the expression. Then you can try to store the zip in a variable, and then test if the variable contains undefined or not:

var zip = obj?.office?.secondary?.zip;
if (zip) {
    console.log(`The secondary office zip is ${zip}.`);
}
else {
    console.log('No secondary office zip code found.');
}

The advantage to putting the question mark after the initial obj is you don't need to first test whether the request returned an object at all. Of course, you'll still need to handle errors appropriately; if the object comes back null because the customer doesn't exist, you'll want a separate error message from simply stating that no secondary office zip code was found. But in any case, your code will be simpler with fewer lines. And fewer lines of code means fewer chances for bugs.
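The behavior can be seen in one runnable snippet (the sample objects here are illustrative, not from the API above):

```javascript
// One response has a secondary office, one doesn't.
const withSecondary = { office: { secondary: { zip: '60007' } } };
const withoutSecondary = { office: {} };

// Optional chaining short-circuits to undefined instead of throwing.
console.log(withSecondary?.office?.secondary?.zip);    // 60007
console.log(withoutSecondary?.office?.secondary?.zip); // undefined
```

Both lines run cleanly; without the question marks, the second one would throw the TypeError shown earlier.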

Another new feature in V8 that fits closely with optional chaining is the nullish coalescing operator. Prior to the latest V8, if you wanted to provide a default value in an expression to use if a member doesn't exist, you could use the logical "OR" operator, like so:

console.log(obj.value || "empty");

This would print out obj.value, unless obj doesn't have a value member, in which case it would print out the word "empty". However, there's a flaw here. If obj.value is 0 or an empty string, "", this line of code will still print out the word "empty". That's because in logical OR expressions, the number 0, the empty string, the value null, and the value undefined all equate to a false value.

The newest V8 provides a new operator that you can use instead. It's called a nullish coalescing operator and it consists of two question marks. It behaves almost exactly like the above OR operator, except only null and undefined values fall through to the default value. So you can use this instead:

console.log(obj.value ?? "empty");

Now if the value is 0 or an empty string, you'll get back the respective value, 0 or empty string. Only if the value is null or undefined will you get the default value, in this case the word "empty".
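The difference is easy to see side by side (a minimal sketch using an illustrative object):

```javascript
const obj = { count: 0, label: '' };

// Logical OR treats 0 and '' as falsy, so the default wins:
console.log(obj.count || 'empty');   // empty
console.log(obj.label || 'empty');   // empty

// Nullish coalescing only falls back on null or undefined:
console.log(obj.count ?? 'empty');   // 0
console.log(obj.label ?? 'empty');   // (prints an empty string)
console.log(obj.missing ?? 'empty'); // empty
```

For API responses, where 0 and empty strings are often legitimate values, ?? is usually the safer default.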

Now go back to the optional chaining feature. The nullish coalescing operator can help you provide a default value, like so:

var custzip = obj.office.secondary?.zip ?? "No secondary zip code";
console.log(custzip);

Here, if the secondary object is present and it has a zip member, the custzip variable will receive the zip code. But if either the secondary object or zip member is missing, the expression to the left of ?? will give back undefined, and the ?? will transform that undefined into the string "No secondary zip code."
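Run together, the two operators cover the missing-object case in a single line (the customer object here is illustrative):

```javascript
// A customer with no secondary office at all.
const customer = { office: { primary: { zip: '10010' } } };

// ?. short-circuits to undefined, then ?? supplies the fallback.
const custzip = customer.office.secondary?.zip ?? 'No secondary zip code';
console.log(custzip); // No secondary zip code
```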

Async Local Storage

This next new feature is still in the experimental stage, which means you'll want to try it and test it out, but not use it in production. The feature is called async local storage, and can be a benefit especially to API providers.

The idea behind async local storage is to tie storage to a series of code sections that run asynchronously. If you've used HTTP frameworks for Node such as Express.js, you've likely used a sort of asynchronous local storage, because Express provides its own implementation that's not native to Node.

The idea is that you might have three functions that get called in sequence in response to a single web request in an API provider system. The first one might look up data based on the user making the HTTP request, and save it into a storage area. The second might do another database lookup, and the third might send the combined data back to the user. The tricky part is that another user on the web (or possibly thousands of users) might also be calling your web server at that same time. And each of the three functions needs its own separate storage areas for each request.

We're not talking about session storage here. We're talking about storage per HTTP request: we want the storage to be unique to each request, but shared across all three functions.

There are already lots of examples on the web that claim to demonstrate how to use the local storage, but most of them are rather contrived and use setTimeout. Perhaps a better, more useful example would be one that uses the async.js library (found at https://caolan.github.io/async/v3/), and the MongoDB native driver (found at https://github.com/mongodb/node-mongodb-native) available through these npm calls:

npm install --save async
npm install --save mongodb

The following code is a modified version of the async local storage example found in the official docs at https://nodejs.org/api/async_hooks.html#async_hooks_class_asynclocalstorage. But instead of using a setImmediate, it does two actual database lookups. The code uses a common pattern where the database lookups take place inside async waterfall functions. But instead of storing the results of the database in a local variable, it stores the results in a store.

Here's the code:

// Load required modules
const http = require('http');
const { AsyncLocalStorage } = require('async_hooks');
const async = require('async');
const MongoClient = require('mongodb').MongoClient;
const asyncLocalStorage = new AsyncLocalStorage();

function getdata(cb) {
    // Connect to the mongo database server
    MongoClient.connect('mongodb://localhost:27017', function(err, client) {
        // Load the database
        const db = client.db('testdb1');
        // Start the sequence of functions, i.e. "waterfall"
        async.waterfall([
            function (wcb) {
                // Open the customers collection and get one customer
                const coll = db.collection('customers');
                coll.findOne({token:'1111'}, function(err,cust){
                    asyncLocalStorage.getStore().customer = cust;
                    wcb();
                });
            },
            function (wcb) {
                // Open the orders collection and get all
                // orders for this one customer. Convert it
                // to an array as well.
                const coll = db.collection('orders');
                coll.find({customer:
                  asyncLocalStorage.getStore().customer._id})
                  .toArray(function(err,orders){
                    asyncLocalStorage.getStore().orders = orders;
                    wcb();
                });
            }
        ], function() {
            // Final step, close the database and call
            // the callback function
            client.close();
            cb();
        });
    });    
}

// Keep an individual count of each time the
// store is created. Store it in the idSeq variable.
let idSeq = 0;
// Create the web server
http.createServer((req, res) => {
    // Create the store for this HTTP request
    var store = { id: idSeq++ };
    // We wrap the store in a call to "run".
    asyncLocalStorage.run(store, () => {
        // Call our getdata function
        getdata(function() {
            // Send the data back to the consumer
            res.end(JSON.stringify(asyncLocalStorage.getStore()));
        });
    });
}).listen(3000);

This code creates a store for each incoming HTTP request by calling asyncLocalStorage.run. Remember, we're still not talking about sessions; the HTTP server is still sessionless. But the store works like a global variable accessible from any of the callbacks triggered from within the run call. You could certainly do as we've done in the past: pass an object around to all the calls, fill in its members accordingly, and send it out with a callback. The local storage approach, however, removes the complexity of passing data around and ensures the store is always there whenever you need it. To grab it, you just call asyncLocalStorage.getStore().

You can see then how the waterfall functions grab the store anytime they need it. They do a database query and store the results in the store object. Then at the end of the request, the code calls res.end and sends the stringified JSON data back to the caller. Note, however, that I've hardcoded the customer token lookup to 1111. In a "real" application you would get that from a cookie that was likely created with a login, or if it's a REST call, from the URL making the call.

If you want to try this out yourself, here's a script to paste into MongoDB to initialize the data:

use testdb1;
db.customers.insert({ "_id" : ObjectId("5eab167ba24567257179fcbf"), "name" : "John Smith", "phone" : "9195551212", "token" : "1111" });
db.customers.insert({ "_id" : ObjectId("5eab1682a24567257179fcc0"), "name" : "Sally Jones", "phone" : "6165551212", "token" : "1112" });
db.orders.insert({ "_id" : ObjectId("5eab1b156d35168ded88f846"), "customer" : ObjectId("5eab167ba24567257179fcbf"), "sku" : "SKU123", "total" : 12.35 });
db.orders.insert({ "_id" : ObjectId("5eab1b1f6d35168ded88f847"), "customer" : ObjectId("5eab167ba24567257179fcbf"), "sku" : "SKU555", "total" : 25.5 });

Then when you run the server code, in another command prompt you can make multiple calls to the code with this:

curl localhost:3000/
curl localhost:3000/
curl localhost:3000/

With each run to curl, you will get back another object. Look at the id of the object and you'll see it's different each time.

Diagnostic Reports

Version 12 of Node introduced an experimental feature called Diagnostic Reports. In version 14 that feature is stable and no longer experimental, which means you're free to start using it in your production code.

Obtaining a diagnostic report is a simple matter of one function call. Save the following code to a file and run the file under node:

function test() {
    process.report.writeReport();
}
test();
console.log('Ready');

The line inside the test function is the single call, writeReport, which saves the diagnostic report to a file. The program continues to run, as you'll see by the line Ready being written to the console.

After you run this, you'll see a new file containing JSON. When you open it you can see a great deal of information, such as the version of Node that's running; information about the computer itself; a full stack trace for where the call to writeReport occurred in the code; full information on the JavaScript memory heap (much like what you can see in Chrome's debug tools); environment variables; and more.

This is only the surface of what's available for diagnostic reports. There are two excellent resources for learning about them; first is a Medium article and second is the official documentation.

Web Assembly

Another interesting feature in Node 14 is an implementation of WebAssembly System Interface (WASI). Like the async local storage, this is also an experimental feature. But if you're building high-performance apps, whether API providers or consumers, you might want to try this out. Node already has a reputation for being blindingly fast, but there's always room for improvement.

We don't have space here to teach you how to use WebAssembly; it's an entire language in its own right that runs inside the V8 engine and has been likened to compiled JavaScript. However, what follows is a very quick demonstration of it in action. Be sure to look at the official documentation at https://nodejs.org/api/wasi.html, and especially note the reminder at the bottom of the example that you have to provide some runtime arguments to Node to activate this feature.

First, here's a WebAssembly program, which I borrowed from the wat2wasm demo at https://webassembly.github.io/wabt/demo/wat2wasm/.

(module
  (func (export "addTwo") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))

Save this in a file called simple.wat. This code module exports a function called addTwo, which takes two integers and returns the sum.

To compile this code, you'll need some tools that you can find at https://github.com/webassembly/wabt. Follow the steps for installing the tools and adding them to your path. Then you can compile the code like so:

wat2wasm simple.wat -o simple.wasm

This will create a binary Web Assembly file called simple.wasm. Finally, create a file called app.js with the following:

const fs = require('fs');
const { WASI } = require('wasi');
const wasi = new WASI({});
const importObject = { wasi_snapshot_preview1: wasi.wasiImport };

WebAssembly.compile(fs.readFileSync('./simple.wasm'))
.then(function(wasm) {
    WebAssembly.instantiate(wasm, importObject).then(function(instance)
    {
        var addTwo = instance.exports.addTwo;
        var test = addTwo(5,6);
        console.log(test);
    });
});

Run the code with these required command-line parameters:

node --experimental-wasi-unstable-preview1 --experimental-wasm-bigint app.js

This code will load the simple.wasm you built from the wat2wasm tool; compile the code; instantiate a WebAssembly instance with that code, and then save the compiled code into an object called instance. You can access the compiled code through the instance.exports object; in this case, the object has a member called addTwo, the same as the function name in the original WebAssembly code. This JavaScript code grabs that function and saves it into a local variable simply called addTwo. Then it calls the function just like it would any other JavaScript function, except this is compiled WebAssembly code. In this case, it passes 5 and 6, saves the result (11) in a variable called test, and prints out the variable.

Conclusion

Be sure to check out the official announcement from the Node.js team, here. Also check out the official documentation at https://nodejs.org/api/. Although some of these new features might seem trivial, such as optional chaining, they can be pretty useful in APIs, both for providers and consumers. Pay close attention to what's still considered "experimental" and don't use those features in production code. But do practice with them, because they will likely become stable in a future release of Node, hopefully sooner rather than later.

Content type group: 
Articles
Top Match Search Text: 
Node.js v14 Has Arrived With Some New API Features

DEV

API Endpoint: 
https://dev.to/api
API Description: 
The DEV community is a platform where developers can share ideas, submit tutorials, and ask questions. The DEV API enables DEV's articles in third parties. Additionally, the API supports comment management, access to listings (classified ads), access to podcasts, and user management. API Key is implemented for authentication, and JSON for requests and responses.
SSL Support: 
Yes
Twitter URL: 
https://twitter.com/thepracticaldev
Developer Support URL: 
https://dev.to/contact
DEV logo
Support Email Address: 
Authentication Model: 
Primary Category: 
Secondary Categories: 
API Provider: 
Device Specific: 
No
Supported Response Formats: 
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
Yes
Supported Request Formats: 
Architectural Style: 
Version: 
0.7.0
Description File URL (if public): 
https://docs.dev.to/api/
Description File Type: 
Is the API Design/Description Non-Proprietary ?: 
Yes
Other(not listed): 
0
Other(not listed): 
0
Version Status: 
Pre-release
Direction Selection: 
Provider to Consumer


Google Ending Support for JSON-RPC and Global HTTP Batch

Super Short Hed: 
Google Ending Support for JSON-RPC and Global HTTP Batch
Featured Graphic: 
Google Ending Support for JSON-RPC and Global HTTP Batch
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Related Companies: 
Featured: 
Yes
Summary: 
Google first announced its intention to discontinue support for JSON-RPC protocol and Global HTTP Batch in its APIs in 2018. Its original timetable was to discontinue support in March of 2019. That timetable was extended, but the new deadline is fast approaching. The new date is August 12, 2020.

Google first announced its intention to discontinue support for JSON-RPC protocol and Global HTTP Batch in its APIs in 2018. Its original timetable was to discontinue support in March of 2019. That timetable was extended, but the new deadline is fast approaching. The new deprecation date is August 12, 2020.

Starting in February and running through August of this year, Google has scheduled downtime for both JSON-RPC and Global HTTP Batch to allow users to identify their systems that rely on these features. Google updates the downtime schedule from time to time; the most up-to-date version will always be posted to this blog post.

For developers looking to test if they rely on JSON-RPC, two options are available. First, send a request to https://www.googleapis.com/rpc. Alternatively, send a request to https://content.googleapis.com/rpc.

Regarding HTTP batch, developers forming homogeneous batch requests using Google API Client Libraries, non-Google API client libraries, or no library should migrate. For more details on how to migrate away from these outdated features, follow the instructions in Google's blog post announcement.

Content type group: 
Articles
Top Match Search Text: 
Google Ending Support for JSON-RPC and Global HTTP Batch

How to Empathetically Design APIs That Developers Will Love

Super Short Hed: 
How to Empathetically Design APIs That Developers Will Love
Featured Graphic: 
How to Empathetically Design APIs That Developers Will Love
Primary Target Audience: 
Primary Channel: 
Primary category: 
Secondary category: 
Contributed Content: 
Yes
Featured: 
Yes
Summary: 
Design is important in many aspects of development which is especially true for development that drives UX. We've learned that API design can have a profound impact on user interfaces and thus user experiences. Poorly designed APIs can lead to awkward, unnatural, or inefficient workflows.

Design is important in many aspects of development, and that is especially true for development that drives user experience. We've learned that API design can have a profound impact on user interfaces and thus user experiences. Poorly designed APIs can lead to awkward, unnatural or inefficient workflows within a user's experience, while a well-designed API can mitigate these issues, or at least make it clear to developers what is technically possible. Ultimately, user interfaces are a representation of the underlying API.

Here are five design, implementation, performance and security best practices to keep in mind when producing APIs.

Best Practice: Make your API a Priority

APIs should be designed and developed with simplicity, consistency and ease of use in mind. Accomplishing these things may be easy in a silo, but the consumer's view may be drastically different since their needs or wants were likely not taken into account. It's always important to design and iterate closely with consumers and/or clients before producing any long-term implementation. The best way to do this is by practicing API-First Design, a design methodology focused on collaboration with stakeholders. Continuing with the alternative may result in an API that conforms to the existing system or simply serves as a conduit to the underlying database, which will almost always ignore the client's workflows. A great analogy exists in the construction world: you wouldn't build a house and then draft the blueprints.

Additionally, by leading with API design, it's possible to identify API specification format and tooling from the beginning. The API specification should be described in an established format such as OpenAPI, API Blueprint or RAML. An established format is likely to have sufficient tooling that clients are familiar with like Apiary, Redocly or Swagger Hub. Depending on the time gap between API Design and client development, it may be appropriate to consider mocking functionality which most established tools will have. Mocking is a good way to give prospective consumers a tour of example data while the implementation is built out.
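As a sketch of what leading with design looks like, here's a minimal OpenAPI fragment for a hypothetical todo API; the title, paths and descriptions are illustrative, not from any real service.

```yaml
openapi: 3.0.3
info:
  title: Todo API (example)
  version: 0.1.0
paths:
  /todo-lists:
    get:
      summary: List todo lists
      responses:
        '200':
          description: A page of todo lists
```

A fragment like this can be loaded into tooling such as Swagger Hub or Apiary to generate docs and mocks before any implementation exists.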

Best Practice: Structure with Resource Oriented Design

There is a very common architectural style known as REST that has been the de facto standard for APIs for some time now. APIs developed using the REST architecture are said to be RESTful. In general, REST provides important and well-known architectural constraints but lacks concrete guidance on authoring APIs. There is an expanded architectural style known as Resource Oriented Design, satisfying all the constraints of REST, that serves as a good API design reference. If we compare REST to SQL, Resource Oriented Design provides normalization properties for REST similar to what 2NF and 3NF provide for SQL. Here are a few constraints to adhere to:

  • The API URIs should be modeled as a resource hierarchy where each node is a resource or collection resource
    • Resource – a representation of some entity in the underlying application, e.g. /todo-lists/1, /todo-lists/1/items/1
    • Collection – a list of Resources, all of the same type, e.g. /todo-lists, /todo-lists/1/items
  • The URI path (the resource path) should uniquely identify a resource, e.g. /todo-lists/1, /todo-lists/56
  • Actions should be handled by HTTP Methods

Following these constraints where practical will lead to a normalized API that's consistent and predictable. Also, by leaving actionable verbiage out of URIs, it's easier to ensure that every resource path is a representation of some entity. This is why verbs are frowned upon for RESTful API URIs as they are likely not a representation of some underlying entity. For example, /activate is likely not a resource.

As far as any data resources themselves, there is no universally accepted answer on the format used to represent them. JSON is widely adopted and understood by the majority of platforms. However, XML or other formats may serve consumers just as well or better in certain situations. The key thing is to represent resources in a way that is quick and easy for consumers to understand.

Representing resources in this fashion enables them to "speak" for themselves; these are known as self-descriptive resources. Self-descriptive resources that are documented using established tooling and open standards will build a sense of trust with the consumer. Ultimately, consumers should buy in to what the API is selling without additional "fluff" material.
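For example, a self-descriptive todo-list resource (all fields here are hypothetical) might be represented as:

```json
{
  "id": 1,
  "title": "Groceries",
  "items": [
    { "id": 1, "sku": "SKU123", "done": false }
  ]
}
```

The field names alone tell the consumer what entity this is and what it contains, without consulting side documentation.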

Best Practice: Semantics with HTTP and REST

In order to make an API come to life, it needs to be actionable especially since it will likely be used to inform a client's user interface – the buttons need to do something. Actions, in the context of REST, should be fulfilled by HTTP methods each with an intended purpose which is described here:

  • GET – query/search for resources; expected to be idempotent and thus cacheable
  • POST – the most flexible REST semantic; any non-idempotent action excluding deletes should be handled by this method
  • PUT – used to replace entire resources, yet still expected to be idempotent
  • PATCH – used to make partial modifications to resources; unlike PUT, it is not guaranteed to be idempotent
  • DELETE – removes a resource from the API; should be idempotent from the caller's view

Additionally, the result of each action should be returned to the client with a proper status code as defined by the HTTP standard. Your API doesn't need to support all of the standard codes. But it can support more than you think. Here are some common statuses that will likely be required on any API project:

  • Successful responses (200 – 399)
    • 200 – responses with a body, for everything besides a creation action
    • 201 – responses for creation actions
    • 202 – responses for a long-running process
    • 204 – responses that don't require a body
  • Client errors (400 – 499)
    • 400 – invalid or bad request, appropriate for syntactic errors
    • 401 – unauthenticated request due to missing, invalid or expired credentials
    • 403 – insufficient permissions (e.g. wrong OAuth scope, requires admin privileges)
    • 404 – resource not found (e.g. represented entity does not exist in the database)
    • 409 – resource conflict (e.g. resource already exists)
    • 422 – request is syntactically valid but not semantically valid
  • Server errors (500 – 599)
    • 500 – classic internal or unknown error, for modeling exceptional/unrecoverable errors, sensitive errors or errors that can't be elaborated on
    • 501 – method not implemented
    • 503 – server unavailable
    • 504 – timeout

For clarity, an idempotent HTTP method can be called many times with the same outcome. Consumers should be able to understand the information, relationships and operations an API is providing by the resources and methods on them alone. For example, a GET method that creates, or a PUT method that deletes a resource will lead to an unpredictable API fraught with unexpected side-effects. Proper HTTP method and status codes, which naturally includes constraints such as idempotence, used in conjunction with self-described resources are nothing more than factual representations of the underlying business domain.
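To make the pairing of methods and status codes concrete, here's a minimal sketch of a pure dispatcher for a hypothetical todo-lists resource. The resource, paths and helper names are all illustrative, not from any framework.

```javascript
// In-memory "database" for the sketch.
const lists = new Map([[1, { id: 1, title: 'Groceries' }]]);

function dispatch(method, path, body) {
  const m = path.match(/^\/todo-lists\/(\d+)$/);
  if (path === '/todo-lists') {
    if (method === 'GET') return { status: 200, body: [...lists.values()] };
    if (method === 'POST') {                        // create: non-idempotent
      const id = Math.max(0, ...lists.keys()) + 1;
      lists.set(id, { id, ...body });
      return { status: 201, body: lists.get(id) };  // 201 for creation
    }
  }
  if (m) {
    const item = lists.get(Number(m[1]));
    if (!item) return { status: 404 };              // entity absent
    if (method === 'GET') return { status: 200, body: item };
    if (method === 'DELETE') { lists.delete(item.id); return { status: 204 }; }
  }
  return { status: 404 };
}

console.log(dispatch('POST', '/todo-lists', { title: 'Work' }).status); // 201
```

Note that every action lives in the HTTP method, every entity lives in the path, and every outcome maps to one of the status codes above.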

Best Practice: Maintaining Performance at the API Layer

Building a fast-performing service requires careful thought and design, often across multiple technical boundaries. The list of things to consider for performance can range from proper use of concurrency in the application on down to adequate database indexing. It just depends on the requirements and needs of the business. The API can uphold performance characteristics of the underlying system in multiple ways, here's some to consider:

  • Asynchronous Server Code – resources that are an aggregate of multiple independent data sources can be built from the results of multiple async executions.
  • Non-blocking Implementation – APIs that access cold I/O, are CPU intensive or are just naturally slow should return a 202 while processing and instruct the client where to get the result. WebSockets and callbacks may be appropriate in certain situations as well.
  • Caching – resources that change relatively slowly can be cached. Idempotent GET requests are the low-hanging fruit. Service-level caching may suffice, but a content delivery network (CDN) may be needed depending on load.
  • Paging – collection resources should be outfitted with paging controls for the client to limit results.
  • Statelessness – keeping track of state across requests can be complex and time consuming for the server. Ideally, state should be managed by the client and passed to the server. This applies to authentication/authorization as well: credentials should be passed on each request. JSON Web Token (JWT) is a good option.
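As a sketch of the caching point above, a small TTL cache wrapped around an idempotent lookup might look like this; the function names and the TTL are illustrative.

```javascript
// Wrap an idempotent lookup in a time-limited in-memory cache.
function cached(ttlMs, fn) {
  const cache = new Map();
  return key => {
    const hit = cache.get(key);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value; // cache hit
    const value = fn(key);                                    // cache miss
    cache.set(key, { value, at: Date.now() });
    return value;
  };
}

let calls = 0; // counts real "database" lookups
const getUser = cached(60000, id => { calls++; return { id }; });
console.log(getUser(7), getUser(7), calls); // second call is served from cache
```

The same shape applies whether the cache lives in the service, in Redis, or at a CDN edge; only the storage behind the Map changes.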

These considerations will go a long way towards meeting satisfactory performance metrics set by the business. Usually, business satisfaction is going to be directly tied to the satisfaction of its consumers. There has been plenty of research to show that a consumer's user experience is tied to the response time of a page. Network latency plays a big factor in page load time so it's important to reduce this time as much as possible. A good rule of thumb is to keep API response time between 150 and 300 milliseconds which is the range for average human reaction time.

Best Practice: Securing your API

Personal and financial information is prevalent on the internet today. It has always been important to safeguard this information. However, the many notorious data breaches of recent years have elevated security from just another consideration to a project non-starter when it's missing.

There are two good rules of thumb when it comes to API security: don't embarrass yourself, and don't reinvent the wheel. It's best to leverage open standards such as OAuth or OpenID, which both cover most authentication flows. It's also advisable to delegate identity matters to purpose-built identity providers such as Auth0, Firebase, or Okta. Security is a hard thing to get right, and the aforementioned vendors have solved this challenge and gone the extra mile or two. Regardless of the standard and/or provider used, it's always important to apply proper access controls to API resources. Sensitive resources should be locked down with appropriate credentials, and a 401 should be returned when these credentials are not provided. In cases where a given user does not have adequate privileges to a resource, a 403 should be returned.
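The 401-versus-403 rule at the end can be captured in a few lines. Here req.credentials and the scope names are hypothetical stand-ins for whatever your auth layer actually provides.

```javascript
// 401: we don't know who you are; 403: we know, and you're not allowed.
function authorize(req, requiredScope) {
  if (!req.credentials) return 401;                         // missing/invalid credentials
  if (!req.credentials.scopes.includes(requiredScope)) {
    return 403;                                             // authenticated, insufficient scope
  }
  return 200;
}

console.log(authorize({}, 'orders:read')); // 401
```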

The best practices highlighted above are established practices in the industry that should help with any API project. By taking an API-First approach, you will be on the right path to fostering trust with your consumers and stakeholders. Practicing established REST structure and semantics, as well as proper security, will be huge contributing factors to your success. Maintaining performance will keep consumers and customers coming back for more. Ultimately, API design is part art and part science. Each API will be different, and there may be some pragmatic decisions to be made. However, it's critical not to stray too far off the beaten path. It's important to remember that your data and information are what your consumers and customers are truly seeking, not your radical new API design. Following these best practices will get this information to your all-important customers while keeping your consumers informed all along the way.

Content type group: 
Articles
Top Match Search Text: 
How to Empathetically Design APIs That Developers Will Love

How to Install Microsoft's VS Code Source Code Editor On Mac or Linux

Super Short Hed: 
How to Install Microsoft's VS Code Source Code Editor On Mac or Linux
Primary Target Audience: 
Primary Channel: 
Primary category: 
Related Companies: 
Related Platform / Languages: 
Product: 

Since its introduction, Visual Studio Code, often called simply "VS Code," has quickly moved to the top of programmers' editor choices. It's easily one of the most configurable, developer-friendly editors available. Even though it's created by Microsoft, Linux and Mac users have embraced it as well. It's fully open source and free, and all of its source code is available on GitHub. The editor runs inside a framework called Electron, which is essentially a sandboxed version of the Chrome browser; as such, most of the editor's own code is written in JavaScript. The editor is highly extensible, with thousands of official and community-built extensions supporting different themes, syntax highlighting for nearly every language imaginable, editing extensions, code snippets, most source code control systems, and more. (For version control, Git is supported out of the box.) Developers are encouraged to create their own extensions and share them with the community through the official Extensions Marketplace.

Installing on Linux

There are several ways you can install VS Code on Linux, depending on your distribution.

Installing on Debian and Ubuntu

For Debian and Ubuntu, don't try to use the apt package manager. Instead, follow these instructions.

First, go over to the VS Code Download Page and download the .deb file.

Open up a terminal or shell prompt. Switch to the directory containing your downloads (typically ~/Downloads, which will be the Downloads directory in your home directory).

cd ~/Downloads

Display a list of the directory's contents and check for the name of the file you downloaded:

ls -ltr

The file should be the last one listed, and the filename will start with "code_" and end with ".deb."

The next step requires superuser privileges. If your username isn't already a superuser, log in as the "root" account, using the root password you established when you first installed Debian (or switch to it with the su command). Once you've logged in as root, run this command:

usermod -aG sudo <username>

replacing <username> with your actual username. For example:

usermod -aG sudo jeffc

(You'll need to log out and back in for the group change to take effect.)

Now, install the .deb file using this command at the shell prompt:

sudo dpkg -i filename

where you replace filename with the name you discovered above.

After that you can delete the .deb file you had downloaded by using the remove command:

rm <filename>

Now you can skip to the section in this article called "Testing it Out."

Technical Note: Although you can use VS Code without knowing this, you might be interested to know that the VS Code installer updates your system's package installer by adding an entry for Microsoft's package repository. Henceforth, VS Code will automatically update itself behind the scenes using your package installer.

Installing on Red Hat, Fedora, SUSE, CentOS

For these other distributions, you can use the package manager by following these steps. First, open up the terminal. (For example, if your distribution has Gnome, you can click the Activities menu, and then search for Terminal.)

As with Debian, you'll need superuser privileges. If you don't already have them, log in as an existing user with root privileges (same as noted in the Debian section above) and run the following command:

usermod -aG wheel <username>

replacing <username> with your actual username. (On these distributions, the "wheel" group has access to sudo privileges, hence the word "wheel" in the command.)

Next, paste the following command into the terminal:

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

This will import the key file for Microsoft's repository. Then paste and run this command at the shell prompt (make sure you get all the lines, as it's a single command):

sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc"> /etc/yum.repos.d/vscode.repo'

Next, paste and run this command, which will update the dnf package cache:

sudo dnf check-update

And finally, paste the following line, which will install VS Code:

sudo dnf install code

Now you can skip ahead to the section in this article called "Testing it Out."

Installing on Mac

The first step is to download the latest version of VS Code (don't use the Homebrew package manager). Go here and click the Mac version. This will download a .zip file. Click the downloaded file at the bottom of your browser (or open the browser's downloads list and click the .zip file there). The Mac's archive utility will open and automatically unzip the file. The unzipped file will be called "Visual Studio Code" and will appear in the Finder in the same directory as the .zip file (likely your Downloads directory). While in Finder, drag that file to the left, into the Applications folder. Now VS Code is installed. However, the path isn't set yet. Normally this is fine, but you'll likely want to open VS Code from the command line in the Mac's Terminal application. Fortunately, VS Code includes a built-in command that will set the path for you. We'll do that next.

Go ahead and open VS Code by double-clicking its filename in the Applications folder. (The first time you'll see a warning that it was downloaded from the Internet. No problem, just click the Open button that's in the warning box.)

After VS Code opens, click the View menu, and choose Command Palette. (If you like hotkeys, you can instead hold down both Command and Shift, and then press P.) You'll see a text box appear at the top of the screen; this is the Command Palette.

The Command Palette

Start typing the words "Shell Command" and you'll see a dropdown list of commands that start with that. Find the one (it's probably first) called "Shell Command: Install 'code' command in PATH." Click it.

Shell Command: Install 'code'

In the lower-right corner you'll see a message:

Shell command 'code' successfully installed in PATH.

You're good to go! Go ahead and exit VS Code by clicking the Code menu, and then choose Quit Visual Studio Code.

Now open a terminal window. You can use the built-in Terminal program found inside the Mac's Utilities folder (which itself is in the Applications folder), or if you have a favorite one (such as iTerm2) you can use that instead. Then you're ready for the next step, "Testing it Out."

Testing it Out

Now you can test it out. At the shell or Terminal prompt, simply type:

code .

The "." tells VS Code to launch in the current directory. If you like, you can omit it and VS Code will launch in a default directory or the directory you last ran it in. Or you can specify a complete path, e.g.:

code /home/<username>/develop

Note for the bash experts: The "code" command is just a launcher. After using it to launch VS Code, you're immediately returned to the shell prompt. (You don't need to run it in the background with an & after it.)

Here are some quick observations. The code editor panel takes up most of the screen, as you would expect.

The code editor panel

On the left is a pane called the Explorer, which includes quick access to your files. The top is a list of all the currently opened files. (If you don't want that there, you can collapse it by pressing the drop-down arrow to the left of the words "OPEN EDITORS".) Under that is a project tree. For more information on using VS Code and finding your way around, check out the official tutorials, found here.

Setting Configuration Options

Here at ProgrammableWeb, we're building a large set of tutorials that will all use a common set of configurations. To make this as simple as possible, we're going to have a root development directory under which you can save your projects. Because of the number of projects, we suggest opening your code editor so that it points to the current project directory (whatever project you're working on at the time), rather than the root of all the projects. We're also going to use a standard set of configurations and plugins for the editors.

Visual Studio Code has a huge array of options, and you have a couple of different ways to control them. First, there's a settings page that lists common settings and gives you places to enter your preferences, as shown in the following image. Behind the scenes, however, VS Code stores your settings in a JSON file that you're free to edit manually. What's cool here is that you can add custom configurations to this file that aren't present in the main settings page, and you'll still be able to use the main settings page. It's not strictly one or the other.

To open the main settings page, start VS Code and do one of the following:

  • Click File -> Preferences -> Settings
  • Click the Gear (found in the lower left) -> Settings
  • Hold down Ctrl (Cmd on the Mac) and press the comma

You should see the settings page open in VS Code's code editor area, as shown here.

The settings page in VS Code's code editor area

To close the settings page, simply click the close button on the tab at the top.

Note: VS Code allows you to include optional settings on a per-project basis. We're not going to use this feature here at ProgrammableWeb; however, if you want to try it out you can read up on it here.

Tabs vs Spaces
Indenting sub-sections of code, such as the internals of an if-then-else statement or a do-while loop, is an important habit for readability, not just for you but for others who may have to read your code later. Like other code editors, VS Code has an indentation feature. Although this is a hotly debated topic, we're going to set our indentation to spaces instead of tabs for one simple reason: spaces work better when copying and pasting code from our web pages into the editor. (You are welcome to reformat to tabs afterward if you prefer.) We're going to use four spaces to allow for easier reading of our code.

Language caveat: Unlike most other languages, Python relies on indentation levels and is picky about consistency between spaces and tabs. If you choose to convert any Python code from spaces to tabs, you must do so for the entire file; otherwise you'll get errors as soon as you run the program.
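To see the caveat in action, the following short snippet (assuming any Python 3 interpreter) compiles a deliberately broken function whose second line is indented with spaces and whose third line uses a tab:

```python
# Python 3 rejects blocks that mix tabs and spaces in their indentation.
# Compiling this deliberately broken snippet reproduces the error you'd
# see after converting only part of a file from spaces to tabs.
mixed = "def f():\n    x = 1\n\ty = 2\n"  # four spaces, then a tab

try:
    compile(mixed, "<example>", "exec")
except TabError as err:
    print("Rejected:", err)
```

Convert the whole file consistently and the error goes away.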

To set the indentation to spaces, look at the status bar at the bottom of the VS Code window; you'll see the words "Tab Size" or "Spaces" followed by a number. "Tab Size" means you're set up to indent using tabs; Spaces means you're set up to indent using spaces. The number you see is the size of the indentation. If you already see "Spaces: 4" then you're good to go. Otherwise, click "Tab Size" or "Spaces", and a popup will open whereby you can configure the tabs, spaces, and indentation size, as shown below.

To set the indentation to spaces, look at the status bar at the bottom of the VS Code window

Click "Indent Using Spaces" to set your preference to spaces. Next, a similar popup will appear with a list of numbers; choose 4. You can then convert the currently open file if you like by again clicking either Tab Size or Spaces and then clicking "Convert Indentation to Spaces."
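If you prefer editing the settings JSON file directly, the equivalent entries look like the fragment below. (The editor.detectIndentation line is optional; setting it to false stops VS Code from guessing indentation from whatever an opened file already uses.)

```json
{
    // Indent with four spaces rather than tabs
    "editor.insertSpaces": true,
    "editor.tabSize": 4,
    // Don't let an opened file's existing indentation override the above
    "editor.detectIndentation": false
}
```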

Fonts and Sizes
This is up to you, as we all have different visual needs. VS Code lets you choose the fonts and sizes you prefer; we'll show you how here. Note, however, that because VS Code technically runs inside its own Chromium-based browser, you can set the zoom level, just like in Chrome. This means you can use both approaches to get the look and feel to your liking. (Try this: with VS Code open, hold down Ctrl and type a + key, which might require pressing Shift depending on which + key you use. You'll see the entire app zoom just like inside a browser. Reverse it by pressing Ctrl and the minus key.)

To set the font family and font sizes, open up the settings, and on the left of the Settings pane, expand Text Editor; under that, click on Font, as shown here:

To set the font family and font sizes, open up the settings, and on the left of the settings pane, expand Text Editor; under that, click on Font

Unfortunately, the font family name is one area where VS Code isn't particularly user-friendly. The issue is that VS Code runs inside a browser, and as such uses HTML/CSS syntax for font family names. HTML/CSS font families are typically lists of font names surrounded by single quotes (when the font name has spaces), with commas between the font names. The browser starts from the left and goes through the list until it finds the first font that exists on the system.

There's no dropdown list or picker of any kind. You just have to know what fonts are available on your system, and you can type them in here. Fonts on Linux-based systems are usually in /usr/share/fonts (and subdirectories under that) or /usr/local/share/fonts. On the Mac, hold down the Command key and press Space; in the search box that opens type Font. Find "Font Book" in the results list and click it; the Font Book app will open to show you your fonts.

The size is more straightforward. Under Font Size simply type the size of the font you want in the editor.
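In the settings JSON, the same two options look like the fragment below; 'Fira Code' is just an example of a font you might have installed, followed by fallbacks in the usual CSS fashion:

```json
{
    // The first font in the list that exists on your system wins
    "editor.fontFamily": "'Fira Code', Menlo, Consolas, monospace",
    "editor.fontSize": 14
}
```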

Recommended Extensions

Our tutorials make use of several languages, and as such, you'll need the plugins to support these languages. By default, VS Code supports all of the languages we're using; however, the VS Code marketplace includes many free extensions that offer additional functionality for these languages, such as extra syntax highlighting and navigation, code linting, and more.

You can also access the marketplace by simply searching inside VS Code. To do so, do one of the following:

  • Click File -> Preferences -> Extensions
  • Click the Gear (found in the lower left) -> Extensions
  • Hold down Ctrl and Shift (Cmd and Shift on the Mac) and press X

On the left side you'll see a list of extensions with a search box at the top, as shown below.

On the left side you'll see a list of extensions with a search box at the top

Try typing "C#". (You don't need to press Enter.) Below that, in the left pane, you'll see a huge list of free extensions, some built by Microsoft and its partners and some built by community members. To see details of an extension, click on it, and a full-page description will open in the editor pane of VS Code. There you can find user reviews (which we encourage you to read and consider) and the number of people who installed it; a large install count (some have several million) is a good indicator of an extension's quality. The better extensions also include full instructions on their detail pages on how to use them. You can install an extension by clicking the little green "Install" button by the extension's title in the left pane, or by clicking the green Install button at the top of the detail page.

Important: Many extensions require you to restart VS Code after installing them. The green Install button will be replaced by a blue button with the words "Reload Required." You can click that button to restart VS Code. (All your existing editor windows will remain open.)

You can also install extensions right from the Marketplace page in the browser. Search for an extension, click on it to open its details page, click the "Install" button; your browser will download the extension and open it in VS Code, where it will be installed.

Once an extension is installed, the green Install button in VS Code is replaced by a gear icon. To remove the extension, click the gear for a menu, and in that menu click "Uninstall." Then click the familiar blue "Reload Required" button.

Here are the extensions we recommend you install, listed by language, along with instructions on how to install them. You can install all of these, or only those for the languages you prefer. These extensions offer syntax highlighting, popups for suggestions as you're typing (known as Intellisense), and more. Each extension includes documentation when you open the details page showing all the features the extension provides.

Java: There are many Java extensions available; we recommend browsing the list by simply searching the Marketplace for "Java." At the very least, we encourage you to install two that Microsoft created, building on work by Red Hat. One is a full Java extension pack, found here, and the other is an integrated debugger, found here.

Node.js and JavaScript: VS Code by default has strong support for Node.js and therefore JavaScript (the language of Node.js). As such, we don't ask you to install any particular Node.js extensions. (However, the community has built several that you are welcome to explore; simply type "node" into the Marketplace search box.)

TypeScript: As TypeScript was invented by Microsoft, VS Code also has strong native support. Note that there is, however, an official extension built by Microsoft called JavaScript and TypeScript Nightly. The purpose of this extension is to allow you to stay on top of the nightly updates to the TypeScript language. We only recommend installing this if you're a very serious TypeScript developer who wants to stay on the bleeding edge.

Python: Microsoft's official Python extension can be found here. Important for Mac: If you're a Mac user, you need to install a separate version of Python from the one that ships with MacOS. You can find the details here. (You're still free to use the version that ships with Mac; however, the plugin won't integrate with that version.)

C#: Microsoft has partnered with another company called OmniSharp to create the official C# extension. You can find it here.

Go (also known as golang): The official maintainers of the Go language (who work at Google) have built a VS Code extension for Go, which you can find here.

PHP: There isn't an official PHP extension, but there are several that millions of people have installed. For additional Intellisense features, go here. For PHP debugging features, install this extension.

Dart: There is no official Dart extension by either Microsoft or the Dart language team; however, there is an extension built by community members that has wide support, with over 1.5 million downloads. We therefore recommend this plugin.

C++: Microsoft's official C++ extension can be found here. This extension supports many different C++ compilers, not just Microsoft's. Specifically, for Linux, the support is for the GNU Compiler Collection (gcc), and for MacOS the support is for Clang (which is included with the Xcode IDE for Mac).

Rust: The official Rust language maintainers created an extension for Rust.

Ruby: Although we don't use Ruby often here at ProgrammableWeb, if you're into it, we recommend this extension. Please pay close attention to the instructions regarding enabling the language server.

SQL: There are many SQL database server extensions available. Microsoft has an official one for SQL Server and one for PostgreSQL. There is limited support for other servers (such as MySQL or Oracle), but by default, VS Code provides syntax highlighting for SQL files.

Other languages: This list is all the languages we use (or intend to use) in ProgrammableWeb's tutorials. But there's an extension for nearly any programming language that exists. For example, if you use Ada, there's an extension for that. Or Fortran, or even COBOL. If you have a favorite language not listed here, enter it in the Marketplace search box and there will likely be an extension for it. (And if not, you can build one.)

Launching the Integrated Terminal

Visual Studio Code includes an integrated terminal, allowing you to run shell commands right from within VS Code. The terminal works the same way as any other terminal; it includes a prompt that shows you your current directory. You can type all the usual bash commands (such as "ls" and "cd") as well as git commands if you're using git. You can learn more about it here. The terminal opens below the code editor as shown below.

Visual Studio Code includes an integrated terminal, allowing you to run shell commands right from within VS Code

To create a new terminal, do one of the following:

  • Click the Terminal menu -> New Terminal
  • Press Ctrl+Shift+` (that's the back-tick character, usually to the left of the 1 key).

Optional: Integrating with git

Git is by far the most widely used source code control software today. We have decided to use git for our source code control, in conjunction with GitHub, a website that hosts code stored in git. You are not required to use git to use our samples; you can even download the samples as .zip files from our respective GitHub project pages and not use git at all. But if you choose to use git, you can easily submit suggestions for improvements to our code.

By default, VS Code comes with full git integration. As such, we do not recommend downloading any additional extensions to use our sample code. Note, however, that we do not recommend relying solely on VS Code for your git work. Our reasoning is that most developer teams still use the command-line version of git, and many tasks are, quite frankly, a bit more intuitive at the command line than inside VS Code. Therefore we're taking a two-pronged approach to git:

  1. When you're editing code that is inside a git repo, note the letters that appear beside your files in the Explorer as you add and modify files (see the screenshot below). The files that are untracked (i.e. not added to the repo) have a letter U to the right of them. Files that are tracked and you've changed get an M (for "modified") next to them. Files that are tracked by git and haven't been modified get no icon.
  2. When you need to do any git work (e.g. create branches, add and commit files), we recommend using the command-line. (You're welcome to use either the integrated terminal for its ease of access, or you can do it in an OS shell such as bash.)
By default, VS Code comes with full git integration
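As a concrete illustration of the command-line side, here's a sketch of that workflow run in a throwaway repo created with mktemp, so it's safe to paste into the integrated terminal; the branch and file names are just examples:

```shell
# Create a scratch repo so nothing here touches your real projects
cd "$(mktemp -d)"
git init -q .

# Create a branch and a new file (the file shows a "U" in VS Code's Explorer)
git checkout -q -b my-feature
echo "hello" > notes.txt

# Stage and commit (the identity flags are only needed in a scratch repo)
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add notes"

# Confirm where we are
git rev-parse --abbrev-ref HEAD
git log --oneline
```

After the commit, the "U" marker next to notes.txt in the Explorer disappears.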

Our future projects here at ProgrammableWeb will have freely available repos on GitHub. (And we welcome pull requests with suggested changes; we're making this a community-driven project.)

Summary: 
Since its introduction, Visual Studio Code, often called simply "VS Code", has quickly moved to the top of programmers' editor choices. It's easily one of the most configurable, developer-friendly editors available. Even though it's created by Microsoft, Linux and Mac users have embraced it as well.
Content type group: 
Articles
Top Match Search Text: 
How to Install Microsoft's VS Code Source Code Editor On Mac or Linux
Includes Source Code: 
0

Panthera-Jekyll

Cloudflare Introduces Cloudflare Workers Unbound

Summary: 
Cloudflare announced Cloudflare Workers Unbound. Workers Unbound is a serverless platform for developers with a specific focus on flexibility, performance, security, ease of use, and pricing. Users can run complicated workloads across the Cloudflare network and pay only for what's used.

Cloudflare has announced Cloudflare Workers Unbound. Workers Unbound is a serverless platform for developers with a specific focus on flexibility, performance, security, ease of use, and pricing. With Workers Unbound, users can run complicated workloads across the Cloudflare network and pay only for what's used.

“I challenged our team to build a platform that didn't just compete with niche edge computing solutions, but would provide developers the fastest, most secure, most flexible, and most cost-effective general-purpose serverless offering — period," Matthew Prince, Cloudflare co-founder, and CEO, commented in a press release. "I'm incredibly proud of Cloudflare Workers Unbound and can't wait to see what developers will build with it.”

Cloudflare Workers Unbound is the next progression beyond Cloudflare Workers, which came out in 2017. The company pitches Workers Unbound as a platform that serves more use cases with more flexibility and better cost. Specific benefits the company points to include:

  • Limitless: Fewer CPU restraints; pay only for what you use.
  • Cost-Effective: Very competitive pricing compared to the rest of the industry.
  • No Hidden Fees
  • No Cold Starts: Out-of-the-box support for 0 nanosecond cold start times.
  • Unthrottled CPU: Cloudflare's isolate-based architecture lets it run CPUs unthrottled so users can get more done per second of compute time.
  • Fast Globally: Workloads run across the Cloudflare network, spanning more than 200 cities in more than 100 countries, reducing average network latency for users everywhere in the world.
  • Instant Updates: Developers can update their code and have it live globally in 15 seconds.
  • Broad Language Support: JavaScript, C, C++, Python, Go, Rust, Scala, Kotlin, and even COBOL.
  • Automatic Scaling: Cloudflare Workers Unbound automatically scales to meet demand without developers needing to spin up new instances.
  • Robust Debugging Tools: Simplified debugging and diagnosing of problems.
  • Secure By Design: Built to withstand the latest security threats, including sophisticated timing attacks, and reviewed by the team that discovered the Spectre class of vulnerabilities.

Cloudflare Workers will now be known as Cloudflare Workers Bundled. Cloudflare Workers Unbound is currently in private beta. Those interested can learn more here.

Content type group: 
Articles
API RoundUp: 
0

Prompty Server

API Endpoint: 
https://app.prompty.io/api/
API Description: 
The Prompty Server API enables developers to send web notifications using HTTP POST methods. JSON formatted payloads are required. Prompty offers tools to engage subscribers via automatic welcome notifications. Prompty supports notification reports to determine performance, and to see how many people received the notification, who read it, and who clicked on it.
SSL Support: 
Yes
Developer Support URL: 
https://www.prompty.io/contact-us/
Prompty Web Push Notification
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
Yes
Version: 
1.0
Is the API Design/Description Non-Proprietary ?: 
No
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer
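As a concrete sketch of what calling this API might look like, the snippet below builds (but doesn't send) a JSON POST against the endpoint listed above. The notifications path, the payload field names, and the Bearer auth header are all illustrative assumptions on our part, not documented values:

```python
import json
import urllib.request

API_BASE = "https://app.prompty.io/api/"  # endpoint from the listing above

def build_notification_request(title, body, api_key):
    """Prepare a JSON POST to a hypothetical notifications path."""
    payload = json.dumps({"title": title, "body": body}).encode("utf-8")
    return urllib.request.Request(
        API_BASE + "notifications",          # assumed path, not documented
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,  # assumed auth scheme
        },
        method="POST",
    )

req = build_notification_request("Welcome!", "Thanks for subscribing.", "YOUR_KEY")
print(req.get_method(), req.full_url)
```

Consult Prompty's own documentation for the real paths, field names, and authentication model before sending anything.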

Prompty Server

API Endpoint: 
https://app.prompty.io/api/
API Description: 
This API allows users to send notifications as well as retrieve other useful information, such as subscriber data and stats, from the Prompty web push notification service.
How is this API different ?: 
This is the official API for the Prompty web push notification service.
SSL Support: 
Yes
Developer Support URL: 
https://www.prompty.io/contact-us/
Prompty Web Push Notification
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Version: 
-1
Is the API Design/Description Non-Proprietary ?: 
No
Version Status: 
Recommended (active, supported)
Direction Selection: 
Provider to Consumer

Facebook Introduces v8.0 of Graph and Marketing APIs

Related APIs: 
Facebook Graph
Summary: 
Facebook just announced v8.0 of its Graph and Marketing APIs. Per usual, the new releases include some breaking changes, feature updates, and deprecations. The company is pointing developers to the Platform Initiatives Hub to stay up to date with all company plans and programs.

Facebook just announced v8.0 of its Graph and Marketing APIs. Per usual, the new releases include some breaking changes, feature updates, and deprecations. The company is pointing developers to the Platform Initiatives Hub to stay up to date with all company plans and programs.

Three changes require developer action. By October 24th, developers must use a user, app, or client token when querying the Graph API for profile pictures via UID, FB OEmbeds, and IG OEmbeds. Second, granular permissions will soon be required for an app to access the business field. By November 2nd, apps need to start requesting granular business_management permissions for the business that owns the ad. Finally, catalog_management and ads_management permissions are being decoupled. By January 31st of next year, developers with access to catalog_management need to prompt users to grant access through the FB Login Dialog.

On the improvements front, business app developers now have better onboarding options and a new reviewable feature called Business Asset User Profile Access. Starting in October, Facebook will move from target cost bidding to cost cap bidding to manage campaign costs.

Marketing API versions 5.0 and 6.0 will be removed on September 28th, and Graph API v3.1 will be removed on October 27th. A number of API endpoints will be deprecated on November 2nd. Check out the changelog for more details.

Content type group: 
Articles
API RoundUp: 
0

3Cols

API Description: 
3Cols is a cloud-based snippet manager that lets you share code snippets with a team, enabling greater productivity and encouraging reusable code. The 3Cols API allows adding, editing, and deleting your snippets.
SSL Support: 
No
Twitter URL: 
https://twitter.com/3_cols
3Cols
Device Specific: 
No
Is This an Unofficial API?: 
No
Is This a Hypermedia API?: 
No
Restricted Access ( Requires Provider Approval ): 
No
Version: 
1.0.0
Is the API Design/Description Non-Proprietary ?: 
No
Version Status: 
Active (supported, scheduled for retirement)
Direction Selection: 
Provider to Consumer
