Getting Started with Google Cloud Functions

Serverless architectures have been gaining wide traction among developers over the last couple of years. They promise to let developers focus on their code while the infrastructure vendor takes care of running that code for them. Serverless paradigms are not entirely new: Platforms as a Service (PaaS) have been around for years and achieve similar goals to a large extent. However, the serverless platforms recently released by large vendors differ significantly in their features, in what developers need to do, and in how they're priced.



Amazon Web Services (AWS) has been off the blocks for a couple of years now with its AWS Lambda offering, and the last six months have seen multiple reference architectures designed around AWS Lambda with close integration with other AWS services. Microsoft has Azure Functions, its serverless offering, and IBM recently introduced its OpenWhisk serverless solution under the Bluemix brand. This article is a technical tutorial on getting started with Google Cloud Functions, the serverless offering on the Google Cloud Platform (GCP). Google Cloud Functions entered beta at the recent Cloud Next 2017, Google's annual cloud conference.

What is Google Cloud Functions?

Google Cloud Functions, like other offerings, gives you the ability to define your computing in short snippets of code, which you can deploy and then run in response to events. As of its beta release, the only supported language for writing Cloud Functions is JavaScript.



High-level features of Google Cloud Functions are listed below:

  • Support for turning existing logic into callable functions that can optionally exist for only as long as the logic is active.
  • Cloud Functions are written in JavaScript and execute in a Node.js v6.9.1 environment on the GCP.
  • Billing is based on the number of invocations and the execution time, rounded to the nearest 100 milliseconds. This provides immediate opportunities to organizations seeking alternative ways to autoscale while minimizing the chance of overcapacity and its associated costs.
  • Integrated with other GCP Services like Cloud Datastore, Cloud Pub/Sub, and more.
  • Directly trigger Cloud Functions via an HTTP webhook. This is useful for triggering a function from external services.
  • Use Cloud Functions directly from Firebase.

Getting Started with Cloud Functions

Let's get started: you'll now write your first Google Cloud Function and execute it.



Google Cloud Functions executes functions based on events. Events can be triggered as a result of a database update in Cloud Datastore, a message being published in Cloud Pub/Sub, a blob getting uploaded to Google Cloud Storage, an event from Firebase, and more. These are called background functions: they execute in response to an event, which acts as the trigger. We'll take a look at the various triggers that Google Cloud Functions supports a bit later in the article.



Google Cloud Functions also supports foreground functions (HTTP invocations): you can directly invoke such a function via an HTTP URL that's unique to your deployed Cloud Function. Let's take a look at that.



Before you start, this tutorial assumes that:

  • You have a GCP project
  • You have enabled Project Billing
  • From the API Manager, you have enabled the Cloud Functions API

To perform the above tasks, refer to the official documentation on Creating a Project, Enabling Billing, and Enabling/Disabling APIs in your Google Cloud Platform project.



Cloud Functions is available from the main menu on the left of the screen as shown below. You'll find it in the Compute → Cloud Functions section.

Accessing Google Cloud Functions via the main menu, in the Compute > Cloud Functions section



Once you click on that, you'll be taken to the Cloud Functions functionality, and if you haven't deployed any Cloud Functions previously, you should see a screen that looks as follows:

Cloud Functions functionality overview



Click on Create function. This will take you to a form, where you can not only fill out the function details, but also use the inline editor to write your Cloud Function. Let's look at the form details in two parts, the first of which is shown below.



You'll provide a name for the function, keep the other defaults, and select HTTP Trigger as the Trigger option. Notice that the unique HTTP URL for your function is shown, too. Syntactically, it conforms to the following format:

https://<region>-<projectid>.cloudfunctions.net/<function-name>
Cloud Functions name and format
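As an illustration of the URL format, a small helper can assemble the endpoint from its parts. The region, project ID, and function name below are placeholders, not values from this tutorial:

```javascript
// Assemble the HTTP endpoint URL for a deployed Cloud Function
// from its region, project ID, and function name.
function functionUrl(region, projectId, functionName) {
  return 'https://' + region + '-' + projectId +
         '.cloudfunctions.net/' + functionName;
}

// Example with placeholder values:
console.log(functionUrl('us-central1', 'my-gcp-project', 'helloWorld'));
// → https://us-central1-my-gcp-project.cloudfunctions.net/helloWorld
```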



The next step is to write the source code for your function in an inline editor, so select the Inline Editor for Source Code option in the Create function screen. This will show a helloworld function template in the inline editor, as given below:

Inline Editor for Source Code option in the Create function screen.



The inline editor is a convenient way to write and test out Cloud Functions, but it may not be the ideal way to write code in the long term. For that, you can use the source code editor of your choice (e.g., Atom, Sublime, etc.) and upload your source code as a ZIP file. You can also deploy it from the Cloud Source Repository, a private Git repository that is provided with each Google Cloud Platform project.




Note: The official Google Cloud Functions documentation states, "Google Cloud Functions are written in JavaScript, and execute in a Node.js runtime. When creating a Cloud Function, your function's source code must be exported in a Node.js module. The Cloud Functions Node.js execution environment follows the Node "LTS" releases, starting with the v6 LTS release published on 2016-10-18. The current Node.js version running in Cloud Functions is Node v6.9.1."



An important field that you need to specify in the Create Function configuration screen is the staging bucket, where the code is staged before it moves into production. The bucket will contain your source code and package.json files, which the platform uses to deploy your function. Specify one, and leave the function to execute as helloWorld, the function exported from your source code; because you haven't modified the template, it remains helloWorld.



Click on Create in the Create Function configuration screen. This operation will take a while, and once it is done you should see your function listed with the green signal:

Create Function configuration screen.



Click on the function name and it will lead you to the Function Details page, where you can see the details for the function, see the triggers defined, view source code, and even test out the function.

Function Details page



Click on the Testing link. You can test out your function from here, or alternatively you can use an API testing tool like Postman to invoke the function directly. In the example above, the source code includes the JSON key:value pair {"message": "Hello!"} as the example input to the function. However, for the purposes of this tutorial, you'll use the JSON key:value pair {"message": "Hello ProgrammableWeb!"} as the triggering event:

Test out your function via the Testing link: triggering event



Click on the Test the function button, which will invoke your function and send the data. The result is the following response:

Test out your function via the Testing link: output/response





You can also view the logs for the function. As mentioned, try invoking the HTTP URL for the function directly. Since no message is supplied, the execution path should correctly go into the undefined-message conditional and display the message shown below in the browser:

View the function logs by invoking the HTTP URL for the function directly.



This completes this quick tour of Google Cloud Functions.

Cloud Functions Triggers

The key to functions is understanding when to invoke them. You've just seen a direct way of triggering a Cloud Function by invoking the function's unique HTTP Trigger. However, in an existing cloud application you're more likely to use other ways to trigger the execution of your Cloud Functions. Usually this is an asynchronous mechanism enabled via events, which occur in other cloud services that your application might be using. For example, say you've written a Cloud Function that sends out an email. This email function could be triggered whenever a new record is written to the registration collection in a database. So, in this use case, what you essentially need is a way for your function to execute automatically in response to a database insert.



Google Cloud Functions supports multiple types of triggers for your functions. At the time of writing, the supported triggers are:

  • HTTP (direct invocation)
  • Google Cloud Pub/Sub
  • Google Cloud Storage
  • Google Cloud Datastore
  • Firebase events

This list is not as comprehensive as those of competing platforms, but it will grow over time as the Google team figures out the best approach to integrating various services across its Google Cloud offerings.



You should think of the above services as event providers. At the same time, the list of services and APIs that Cloud Functions can access and invoke across GCP is much longer; it includes other data services like Spanner and BigQuery, as well as the machine learning APIs. Google provides a page that details the entire range of supported APIs.

Building a Complete Application

You'll now build out an application that will use Google Cloud Functions to construct a serverless pipeline to process files uploaded to Google Cloud Storage. The files uploaded will be in English and you'll use the Google Translate API to translate the file contents into Spanish and then write them back to Google Cloud Storage (GCS). You'll do all this via Google Cloud Functions and use the gcloud JavaScript SDKs for Google Cloud Storage and Google Translate API.

Solution Architecture

The solution architecture is:

Solution Architecture



Let's go through the entire workflow and then you can dive into setting up the project and developing your function. The steps are numbered in the diagram above:

  1. The user uploads the files to be translated directly into the Google Cloud Storage (GCS) bucket designated for this. The content in these files is in English. For the purpose of this demonstration, assume the files are in English; you're not going to validate the language used or check that they're text files. Additionally, assume the files are small (a few KB, not MBs), because size has a bearing on the JavaScript code you'll write in the function.
  2. You'll configure a Google Cloud Function with its Trigger set as Google Cloud Storage (GCS) and specify the bucket to monitor, which will be the same bucket that contains our uploaded files. When a file is uploaded into the bucket, the function is invoked with the details about the object (i.e., file uploaded). This is done for each file that is uploaded.
  3. The Cloud Function will read the contents of the file and invoke the Google Translate API to translate the English text to Spanish. To do this, you'll use the Google Cloud Translate Node.js API.
  4. Using the Google Cloud Storage Node.js API again, the translated text is then written into a new object that's saved to Google Cloud Storage (GCS).

Google Cloud Platform Setup

You can set up a new GCP project with billing enabled or you can use the previous project that you used to test Google Cloud Functions.




For the project, make sure you've enabled the:

  • Cloud Functions API
  • Cloud Storage API
  • Translation API

Creating Google Cloud Storage Buckets

You're going to create the two storage buckets shown below:

Creating Google Cloud Storage Buckets

The first bucket (cf-translation-bucket) is where you'll upload the files to be translated.  The other bucket (cf-translation-bucket_es) is where Google Cloud Functions will save the translated files.

Creating a New Google Cloud Function

Create a new Google Cloud Function the same way as you did above. This time, make sure the Trigger type you select is Cloud Storage bucket, and select the correct bucket, cf-translation-bucket, as shown:

Creating a New Google Cloud Function





The Google Cloud Function code (index.js) is shown below. Visit the Source tab and then click on the Edit link. This will bring up the inline editor, where you can paste the code shown:

'use strict';
const Storage   = require('@google-cloud/storage');
const Translate = require('@google-cloud/translate');

// Instantiate the clients
const storageService = Storage();
const translateService = Translate();

function getFileStream (file) {
 if (!file.bucket) {
   throw new Error('Bucket not provided. Make sure you have a "bucket" property in your request');
 }

 if (!file.name) {
   throw new Error('Filename not provided. Make sure you have a "name" property in your request');
 }

 return storageService.bucket(file.bucket).file(file.name).createReadStream();
}

function getWriteFileStream (bucketname, filename) {
 return storageService.bucket(bucketname).file(filename).createWriteStream();
}

exports.processFile = function (event, callback) {
 const inboundFile = event.data;
 console.log('Processing file: ' + inboundFile.name);

 if (inboundFile.resourceState === 'not_exists') {
   // This is a file deletion event, so skip it
   callback();
   return;
 }

 let inboundFileStream = getFileStream(inboundFile);
 let translatedFile = getWriteFileStream('cf-translation-bucket_es', inboundFile.name);

 let dataContents = '';
 // Read the contents of the uploaded file
 inboundFileStream
 .on('error', function (err) {
   // Surface read errors instead of swallowing them
   callback(err);
 })
 .on('data', function (chunk) {
   dataContents += chunk;
 })
 .on('end', function () {
   console.log(dataContents);
   // Translate the English contents into Spanish
   translateService.translate(dataContents, 'es', function (err, translatedText) {
     if (err) {
       callback(err);
       return;
     }
     console.log(translatedText);
     // Write the translation to the target bucket under the same object name
     translatedFile.write(translatedText);
     translatedFile.end();
     callback(null, 'Translation Done');
   });
 });
};



You will also see a link to the package.json file; click it. This file declares the libraries our code needs, along with their versions:

"name": "translation-cloud-function","version": "1.0.0","dependencies": {"@google-cloud/storage": "0.7.0","@google-cloud/translate":"0.6.0","request": "2.79.0"
   }
}

Let's go through the code in brief:

  1. The function you're exporting is processFile, which contains the bulk of the operation. Earlier, you configured the function with the trigger type Cloud Storage bucket, so it will be invoked whenever a file operation (add, delete, etc.) happens in that bucket.
  2. The first thing processFile checks is whether the newly added file object still exists (in other words, that it has not been deleted). If that check succeeds, it uses the Google Cloud Storage API with the bucket and object names to get a read stream.
  3. The code reads the contents of the file to be translated from the stream into the variable dataContents, then passes dataContents to the Google Translate API and receives the result as translatedText.
  4. The result is then written to another object on GCS via the write stream. Note that the file is written with the same name but to a different bucket (cf-translation-bucket_es). This object will contain the Spanish translation of the original text that was uploaded.

Testing the Function

To test out the function, have on hand some sample text files in English to translate to Spanish. From the Google Cloud Console, go to Cloud Storage and upload the files from your local machine into the correct bucket (cf-translation-bucket). For example, shown below are a couple of sample files uploaded into the bucket:

Using the Google Cloud Console to go to the Cloud Storage to upload the files in the correct bucket



The Google Cloud Function will then be invoked. When your function executes, its log is made available to you. Visit the Logs for the function to see the function executing, the statements it prints to the console, and the execution time. You'll find the link to the logs at the bottom of the Testing tab. An execution from this setup is shown below:

Execution logs for the function



Once the function executes successfully, you'll see the translated files produced in your target bucket (the cf-translation-bucket_es) as shown below:

The translated files are produced in your target bucket



You can download and view the files to verify a successful translation. This completes your application.

Pricing

One of the key benefits of moving to a serverless model is price. You're charged by the number of invocations of the function and the resources the function uses in terms of time and network. This is a very different model from the typical Infrastructure as a Service (IaaS) approach, where a full-blown server, such as one based on Amazon's EC2, must be provisioned to support your code (even if only for a few minutes via DevOps APIs). With it, you're saddled with the fees associated with that server, its storage, and other resources. Additionally, if servers are being dynamically provisioned and de-provisioned to service function-like code, there's a performance penalty associated with the provisioning part of the workflow. Recent case studies have indicated significant price benefits of the serverless model.



Google offers a web page detailing Google Cloud Functions pricing. It provides multiple examples to help you understand the economics. The fee for Cloud Functions is determined by the following factors:

  • Number of invocations: The first 2 million invocations per month are free. After that you're charged at $0.40/million invocations.
  • Compute time: This is the time taken to execute your function, from the time it receives a request to completion, rounded off to the nearest 100ms. Compute time is charged based on the memory and CPU resources that you use. The free tier provides 400,000 GB-seconds and 200,000 GHz-seconds of compute time.
  • Network egress: The outbound data transferred out from your functions to other services hosted elsewhere is charged. The first 5GB per month is free and after that you are charged at $0.12/GB.
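As a rough illustration of the invocation charge alone (ignoring compute time and egress, which are billed separately), the arithmetic works out as follows:

```javascript
// Rough invocation-cost estimate based on the published rates above:
// first 2 million invocations per month free, then $0.40 per million.
function invocationCost(invocations) {
  const FREE_INVOCATIONS = 2000000;
  const RATE_PER_MILLION = 0.40; // dollars per million, past the free tier
  const billable = Math.max(0, invocations - FREE_INVOCATIONS);
  // Round to whole cents to avoid floating-point dust in the result
  return Math.round(billable / 1000000 * RATE_PER_MILLION * 100) / 100;
}

// 5 million invocations → 3 million billable → $1.20
console.log(invocationCost(5000000));
```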

The pricing details given above include the generous Always Free tier announced recently at Cloud Next 2017. To summarize: every month, the first 2 million invocations, 400,000 GB-seconds and 200,000 GHz-seconds of compute time, and 5GB of network egress are free.

Google Cloud Functions: Looking Ahead

The beta release of Google Cloud Functions is a welcome move by Google, because developers were anxious to see what GCP's response to serverless platforms like Amazon's Lambda would be, especially given that serverless offerings from other IaaS providers have been around for a while.



While serverless platforms are often touted in the press as the way most applications will be developed and delivered in the near future, the model doesn't map well to the way most developers design their applications today. Moving to it requires not just re-architecting your existing application into a serverless model; you could also end up with a non-optimal application architecture.



Having said that, in its current state, Google Cloud Functions definitely needs to address the following areas to become a more complete offering:

Support for More Languages

Google Cloud Functions currently supports only JavaScript. While JavaScript is a popular language, it's important to add support for more languages. Doing so will bring in developers who have strong preferences for one language over another. This is particularly important because developers might be looking at existing code in current applications that they want to move to functions, in which case they might go with the language they originally used to build the application. AWS Lambda, by comparison, provides first-class support for JavaScript, Java, Python, and C#.

API Gateway

The lack of an API Gateway in the current GCP offering needs to be addressed as soon as possible. An API Gateway is an essential component of a public API offering: it provides key management, throttling, centralized security, logging, and more. In its recent acquisition of Apigee, Google also acquired an API gateway, so don't be surprised if Google makes a flavor of the Apigee API Gateway available by default to manage both its Cloud Endpoints solution and Cloud Functions. A unified management layer like this would push the solution to the next level, in line with what developers expect from an API offering.

Additional Triggers

One of the key factors in understanding the range of solutions a serverless platform can support is its triggers. A trigger, as you saw earlier, is an event that causes a function to execute. Google Cloud Functions currently provides triggers from services like Google Cloud Datastore, Cloud Pub/Sub, Google Cloud Storage, and Firebase, as well as direct invocation. The set of triggers is still sparse compared to those of similar services from Amazon Web Services and Azure. Additional triggers would mean integration with more services, which would be a great addition to the offering.

Reference Architecture Implementations

Google Cloud Functions needs to highlight more reference implementations showing how developers, both those new to the platform and existing GCP users, can move some of their existing functionality to it. Because serverless programming is a radically different way of building applications, a more pragmatic set of solutions from Google would help developers make the choice. Currently, this sort of prescriptive migration guidance is too sparse, leaving developers to fend for themselves.



In closing, Google Cloud Functions is a solid and stable beta release offering serverless development on GCP. It integrates well with other GCP services like Cloud Pub/Sub, Cloud Datastore, Firebase, and more. If you're invested in GCP, look at how you can move some of your computing to Cloud Functions; with its different billing model, it might benefit your projects both from a cost perspective and in how you develop and deploy key parts of your application.
