Abstract
This white paper introduces Microsoft Codename Cloud Numerics lab (referred to as Cloud Numerics in the following content), a numerical and data analytics library. It provides guidelines for data scientists and others who write C# applications to enable their applications to be scaled out, deployed, and run on Windows Azure.
Copyright Information
This document supports a preliminary release of a software product that may be changed substantially prior to final commercial release, and is the confidential and proprietary information of Microsoft Corporation. It is disclosed pursuant to a non-disclosure agreement between the recipient and Microsoft. This document is provided for informational purposes only and Microsoft makes no warranties, either express or implied, in this document. Information in this document, including URL and other Internet Web site references, is subject to change without notice. The entire risk of the use or the results from the use of this document remains with the user. Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
Microsoft, Windows, Windows HPC Server, Visual Studio, and Windows Azure are trademarks of the Microsoft group of companies.
Contents

1 Introduction
  1.1 A Hello, World example in Cloud Numerics
  1.2 What Cloud Numerics is not
  1.3 What this document does and does not cover
  1.4 High-level overview
2 The Cloud Numerics Programming Model
  2.1 Arrays in Cloud Numerics
    2.1.1 Array element types
    2.1.2 Indexing and iteration
    2.1.3 Array broadcasting
  2.2 Local arrays in Cloud Numerics
    2.2.1 Underlying storage
    2.2.2 Array factory type
    2.2.3 Interoperability with .NET arrays
    2.2.4 Indexing and iteration with local arrays
    2.2.5 Other operations
  2.4 Distributed array types
  2.5 Working with distributed arrays
    2.5.1 Conversion between local and distributed arrays
    2.5.2 Indexing distributed arrays
    2.5.3 Other distributed array operations
3 The Cloud Numerics Runtime Execution Model
  3.1 Application development, debugging and deployment
  3.2 Process model
4 Conclusions
5 References
1 Introduction
Cloud Numerics is a new .NET programming framework tailored towards performing numerically-intensive computations on large distributed data sets. It consists of:
- a programming model that exposes the notion of a partitioned, or distributed, array to the user
- an execution framework, or runtime, that efficiently maps operations on distributed arrays to a collection of nodes in a cluster
- an extensive library of pre-existing operations on distributed arrays
- tools that simplify the deployment and execution of a Cloud Numerics application on the Windows Azure platform
Programming frameworks such as Map/Reduce [2, 4] (and its open-source counterpart, Hadoop [9]) have evolved to greatly simplify the processing of large datasets. These frameworks expose a very simple end-user programming model, and the underlying execution engine abstracts away most of the details of running applications in a highly scalable manner on large commodity clusters. These simplified models are adequate for performing relational operations and for implementing clustering and machine-learning algorithms on data that is too large to fit into the main memory of all the nodes in a cluster. However, they are often not optimal when the data does fit into the main memory of the cluster nodes. Additionally, algorithms that are inherently iterative, or that are most conveniently expressed in terms of computations on arrays, are difficult to express in these simplified programming models. Finally, while the Hadoop ecosystem is extremely vibrant and has developed multiple libraries such as Mahout [7], Pegasus [5] and HAMA for data analysis and machine learning, these currently do not leverage existing, mature, scalable linear algebra libraries such as PBLAS and ScaLAPACK [1] that have been developed and refined over several years.

In contrast, libraries such as the Message Passing Interface, or MPI [8], are ideally suited for efficiently processing in-memory data on large clusters, but are notoriously difficult to program correctly and efficiently. A user of such a library has to carefully orchestrate the movement of data between the various parallel processes. Unless this is done with great care, the resulting application will exhibit extremely poor scalability or, worse, crash or hang in a non-deterministic manner. The programming abstractions provided by Cloud Numerics do not expose any low-level parallel-processing constructs.
Instead, parallelism is implicit and automatically derived from operations on data types that are provided by the framework such as distributed matrices. These parallel operations in turn map to efficient implementations that in turn utilize existing high-performance libraries such as the PBLAS and ScaLAPACK libraries mentioned above. The rest of this document provides an overview of the programming and runtime execution models in Cloud Numerics and is intended to complement the quick start guide and the library API reference.
The Cloud Numerics runtime also employs optimizations such as zero-copy memory transfers and shared-memory-aware collectives within a single multi-core node. More importantly, array operators in Cloud Numerics can leverage the vast ecosystem of high-performance distributed-memory numerical libraries, such as ScaLAPACK, built on top of MPI.
[Figure: the Cloud Numerics software stack. A Cloud Numerics application sits on top of the Cloud Numerics libraries and runtime, which in turn build on system libraries for process and thread management, message passing, memory management, and exception handling.]
Arrays such as the web-connectivity matrix described above are fundamental types in Cloud Numerics. Further, vector operations (such as inner products and norms) and array operations (such as matrix-vector multiplication) can be expressed succinctly using the Cloud Numerics library and executed efficiently even when the vectors and matrices are very large and are therefore partitioned between several nodes in a cluster. On the flip side, the Cloud Numerics programming model may not be a good fit for algorithms that cannot be easily expressed in array notation (such as relational joins between two data sets), or for applications, such as parsing and simplifying petabytes of text data, where numerical computation is not the main bottleneck. However, once the data has been boiled down and transformed to its core numerical structure, it can be further analyzed using Cloud Numerics.
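As a concrete illustration of the kind of algorithm that is naturally expressed in array notation, the following is a hedged, language-neutral sketch (plain Python lists stand in for Cloud Numerics arrays; the function names are illustrative, not part of any library) of one step of a PageRank-style power iteration on a small connectivity matrix:

```python
# Illustrative sketch only: one power-iteration step y = A x, normalized.
def matvec(A, x):
    # Matrix-vector product expressed as a row-wise dot product.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def power_step(A, x):
    y = matvec(A, x)
    norm = sum(abs(v) for v in y)
    return [v / norm for v in y]

A = [[0.0, 1.0], [1.0, 1.0]]   # toy connectivity matrix
x = power_step(A, [0.5, 0.5])
print(x)  # the entries sum to 1
```

In Cloud Numerics, the same step would be written directly in terms of distributed matrix and vector operations, with the partitioning and communication handled by the runtime.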
Both local and distributed arrays support the following element types: Boolean; one-, two-, four- and eight-byte signed and unsigned integers; single- and double-precision real floating-point numbers; and single- and double-precision complex floating-point numbers. The corresponding .NET types are listed below:

Array element type              Description                                    Range of values
System.Boolean                  Single-byte logical values                     true/false
System.Byte                     Single-byte unsigned integers                  0 to 255
System.SByte                    Single-byte signed integers                    -128 to 127
System.UInt16                   Two-byte unsigned integers                     0 to 65,535
System.Int16                    Two-byte signed integers                       -32,768 to 32,767
System.UInt32                   Four-byte unsigned integers                    0 to 4,294,967,295
System.Int32                    Four-byte signed integers                      -2,147,483,648 to 2,147,483,647
System.UInt64                   Eight-byte unsigned integers                   0 to 18,446,744,073,709,551,615
System.Int64                    Eight-byte signed integers                     -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
System.Single                   Single-precision IEEE 754 real values          approximately +/- 3.4 x 10^38
System.Double                   Double-precision IEEE 754 real values          approximately +/- 1.7 x 10^308
Microsoft.Numerics.Complex64    Single-precision complex values                components in the System.Single range
Microsoft.Numerics.Complex128   Double-precision complex values                components in the System.Double range

2.1.2 Indexing and iteration
An important interface that IArray<T> in turn implements is the .NET enumerable interface, IEnumerable<T>. Therefore, .NET methods that accept IEnumerable<T> (for instance, LINQ operators) can be passed instances of Cloud Numerics arrays.

2.1.3 Array broadcasting
In NumPy [6], broadcasting refers to two related concepts: (a) the application of a scalar function to every element of an array, producing an array of the same shape, and (b) the set of rules governing how two or more input arrays of different shapes are combined under an element-wise operation. In environments such as NumPy and Cloud Numerics that support a rich set of operations on multi-dimensional arrays, broadcasting is often used as a convenient mechanism for implementing certain forms of iteration. Several element-wise operators and functions in Cloud Numerics support broadcasting using conventions similar to NumPy's. For example, given a local or distributed array A and a scalar s, Microsoft.Numerics.BasicMath.Sin(A) applies the sine function to each element of A, and A + s adds s to each element of A. Similarly, for a vector v with the same number of rows as A, A - v subtracts v from every column of A (in other words, Cloud Numerics broadcasts the vector v along the columns of A).[1]
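The two broadcasting forms described above can be sketched, purely for illustration, in plain Python (the function names are hypothetical stand-ins for the Cloud Numerics operators):

```python
import math

# Form (a): apply a scalar function to every element of an array.
def broadcast_fn(fn, A):
    return [[fn(x) for x in row] for row in A]

# Form (b): combine arrays of different shapes element-wise; here, subtract
# a column vector v from every column of the matrix A.
def broadcast_sub(A, v):
    cols = len(A[0])
    return [[A[i][j] - v[i] for j in range(cols)] for i in range(len(A))]

A = [[1.0, 2.0], [3.0, 4.0]]
print(broadcast_fn(math.sin, A)[0][0])  # sin applied element-wise
print(broadcast_sub(A, [1.0, 2.0]))     # [[0.0, 1.0], [1.0, 2.0]]
```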
Dense local arrays in Cloud Numerics are not backed by arrays allocated in the .NET/CLR heap. This is primarily because arrays allocated in the CLR are currently constrained to be smaller than two gigabytes, whereas arrays allocated on the native heap do not have this limitation. Additionally, the underlying native array types in the Cloud Numerics runtime are sufficiently flexible that they can be wrapped in environments other than .NET (say, Python or R) with very little additional effort. Unlike arrays in .NET, which are internally stored in row-major order, multi-dimensional arrays in Cloud Numerics are represented in column-major order, which allows for efficient interoperability with high-performance numerical libraries such as BLAS and LAPACK.

2.2.2 Array factory type
The local NumericDenseArray class does not provide any public constructors; users are expected to create instances of local arrays using a factory class, NumericDenseArrayFactory, which provides a number of static array-creation methods. For example, to create a local array of double-precision values filled with zeros, one would use:

var doubleArray = NumericDenseArrayFactory.Zeros<double>(4, 10, 12);

2.2.3 Interoperability with .NET arrays
As mentioned above, arrays in Cloud Numerics are not backed by .NET arrays; instead, the NumericDenseArrayFactory class provides a method to create a new Cloud Numerics array from a .NET array. For example, to create a new local array from a .NET array, one would use:

var aDotNetArray = new double[,]{{1.0, 2.0}, {3.0, 4.0}};
var cnArray = NumericDenseArrayFactory.CreateFromSystemArray<double>(aDotNetArray);
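Because .NET arrays are row-major while Cloud Numerics arrays are column-major, such conversions necessarily reorder elements in memory. The two element orders can be sketched as follows (plain Python, purely illustrative):

```python
# Flatten a matrix (given as a list of rows) in each of the two storage orders.
def flatten_row_major(M):
    # .NET multi-dimensional array layout: rows are contiguous.
    return [x for row in M for x in row]

def flatten_column_major(M):
    # Cloud Numerics / BLAS / LAPACK (Fortran) layout: columns are contiguous.
    return [row[j] for j in range(len(M[0])) for row in M]

M = [[1.0, 2.0], [3.0, 4.0]]
print(flatten_row_major(M))     # [1.0, 2.0, 3.0, 4.0]
print(flatten_column_major(M))  # [1.0, 3.0, 2.0, 4.0]
```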
[1] In the first version, Cloud Numerics only supports broadcasting for scalars; this limitation will be addressed in the next release.
Conversely, the NumericDenseArray class provides two methods that return .NET arrays: one returns a multi-dimensional .NET array of the same shape and with the same elements as the NumericDenseArray instance, and the other returns a one-dimensional .NET array that contains the elements flattened in row-major order. These are shown in the code snippets below:

double[,] anotherDotNetArray = (double[,]) cnArray.ToSystemArray();
double[] flatDotNetArray = cnArray.ToFlatSystemArray();

2.2.4 Indexing and iteration with local arrays
A local array in Cloud Numerics can be indexed by scalars just like any other multi-dimensional array in .NET. Currently, indices are constrained to be scalars, and the number of indices must equal the number of dimensions of the array, as shown in the following examples:

double twoDValue = twoDArray[3, 4];
double threeDValue = threeDArray[2, 3, 4];

Similarly, local arrays support iteration over the elements in flattened column-major order. For instance, one could compute the Frobenius norm of a matrix, defined as

    ||A||_F = sqrt( sum_{i,j} a_{ij}^2 )

using the following LINQ expression:

var froNorm = Math.Sqrt(doubleMatrix.Select((value) => value * value).Sum());
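The same computation, sketched in plain Python for illustration (the Frobenius norm sums the squares of all elements, so iteration order, whether row- or column-major, does not affect the result):

```python
import math

def frobenius_norm(M):
    # Square root of the sum of squared elements of the matrix.
    return math.sqrt(sum(x * x for row in M for x in row))

print(frobenius_norm([[3.0, 0.0], [0.0, 4.0]]))  # 5.0
```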
2.2.5 Other operations
In addition to the primitive operations shown in this section, local arrays in Cloud Numerics also support operations such as element-wise arithmetic and logical operators, reshaping an existing array to a different shape while preserving the order of elements in memory, transposing an array, and filling the elements of an array using a generator function. For example, given a general square matrix M, one could decompose it into the sum of a symmetric and a skew-symmetric matrix,

    M = 0.5 * (M + M^T) + 0.5 * (M - M^T)

using:

var sym = 0.5 * (matrix + matrix.Transpose());
var skewSym = 0.5 * (matrix - matrix.Transpose());
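The same decomposition, sketched with plain Python lists for illustration: the first term is symmetric and the second is skew-symmetric, and the two sum back to M.

```python
# Illustrative sketch only; Cloud Numerics expresses this directly on arrays.
def transpose(M):
    return [list(col) for col in zip(*M)]

def decompose(M):
    T = transpose(M)
    n = len(M)
    sym = [[0.5 * (M[i][j] + T[i][j]) for j in range(n)] for i in range(n)]
    skew = [[0.5 * (M[i][j] - T[i][j]) for j in range(n)] for i in range(n)]
    return sym, skew

sym, skew = decompose([[1.0, 2.0], [4.0, 3.0]])
print(sym)   # [[1.0, 3.0], [3.0, 3.0]]
print(skew)  # [[0.0, -1.0], [1.0, 0.0]]
```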
This section briefly summarizes the partitioning scheme for distributed arrays in Cloud Numerics. Note, however, that the Cloud Numerics runtime automatically handles the partitioning of arrays between several locales, and the user normally does not need to specify any aspects of data distribution. A distributed array is constrained to be partitioned across the locales along only one of its dimensions. The dimension along which an array is partitioned is referred to as its distributed dimension. Each locale then gets a contiguous set of slices of the partitioned array along the distributed dimension; the set of indices along the distributed dimension corresponding to the slices assigned to a particular locale constitutes the span of the array on that locale. Finally, each locale always gets complete slices of the distributed array, and the slices assigned to two different locales never overlap. This partitioning scheme is illustrated in Figure 2 for a matrix that is distributed across three locales, first by rows and then by columns. By default, a multidimensional array is always distributed along its last non-singleton dimension. For instance, a matrix is distributed by columns, whereas a vector is distributed by rows.
[Figure 2. A matrix distributed across three locales, by rows and by columns; each locale holds a contiguous span of slices, labeled Span 0, Span 1 and Span 2.]
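The span assignment described above can be sketched as follows. The exact remainder policy, here giving the extra slices to the first locales when the distributed dimension does not divide evenly, is an assumption for illustration only:

```python
# Split the indices along the distributed dimension into contiguous,
# non-overlapping spans, one per locale.
def spans(dim_length, num_locales):
    base, extra = divmod(dim_length, num_locales)
    result, start = [], 0
    for locale in range(num_locales):
        size = base + (1 if locale < extra else 0)
        result.append(range(start, start + size))
        start += size
    return result

print([list(s) for s in spans(7, 3)])  # [[0, 1, 2], [3, 4], [5, 6]]
```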
Generally, methods and static functions in Cloud Numerics that return arrays produce distributed outputs given distributed inputs (the ToLocalArray method, and functions that return scalars such as Sum, are important exceptions to this rule). The precise manner in which the distribution propagates is as
[2] A locale is defined in the section on the execution model; it can be thought of as a process containing a portion of a distributed array.
follows: if one or more input arguments to a function are arrays, the distributed dimension of the output is the maximum of the distributed dimension of all the inputs. If this distributed dimension is greater than the last non-singleton dimension of the output, the output is distributed along its last non-singleton dimension.
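This rule can be sketched as a small helper function (hypothetical, zero-based dimension indices; not part of the library API):

```python
# The output's distributed dimension is the maximum of the inputs' distributed
# dimensions, clamped to the output's last non-singleton dimension.
def output_distributed_dim(input_dims, output_shape):
    d = max(input_dims)
    last_nonsingleton = max(i for i, n in enumerate(output_shape) if n > 1)
    return min(d, last_nonsingleton)

# Inputs distributed along dimensions 0 and 1; a 100-by-200 output is
# distributed along dimension 1.
print(output_distributed_dim([0, 1], (100, 200)))  # 1
# If the output's second dimension is singleton, the distribution falls back
# to dimension 0.
print(output_distributed_dim([0, 1], (100, 1)))    # 0
```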
Local arrays are implicitly converted to distributed arrays by assignment. As a consequence, any function that takes a distributed array as an input can be passed a local array, and the runtime will transparently promote the local array to a distributed one.

var lArray = Local.NumericDenseArrayFactory.Create<double>(400, 400);
Distributed.NumericDenseArray<double> dArray = lArray;

On the other hand, a distributed array cannot be implicitly converted to a local array, since distributed arrays are assumed to be much larger than what can fit into the memory of a local machine. Therefore, converting a distributed array to a local array requires the user to explicitly call the NumericDenseArray<T>.ToLocalArray() method. In a similar fashion, a distributed array can be converted to a .NET array via the NumericDenseArray<T>.ToSystemArray method.

var dArray = new Distributed.NumericDenseArray<double>(100, 100, 500);
var lArray = dArray.ToLocalArray();
var csArray = dArray.ToSystemArray();
2.5.2 Indexing distributed arrays

Distributed arrays can be indexed by scalars just like their local counterparts, using the same syntax shown earlier for two- and three-dimensional local arrays.
Unlike local arrays, distributed arrays do not support iterating over their elements. This is primarily because there is no built-in facility in .NET for iterating over partitioned collections in parallel.

2.5.3 Other distributed array operations
Distributed arrays support almost exactly the same set of operations that are supported by local arrays. For instance, the earlier example of decomposing an array into a symmetric and a skew-symmetric matrix:

var sym = 0.5 * (matrix + matrix.Transpose());
var skewSym = 0.5 * (matrix - matrix.Transpose());

works unmodified when the argument is a distributed matrix.
3 The Cloud Numerics Runtime Execution Model
Figure 3. A distributed Cloud Numerics application executing on a master and three worker processes. The processes are interconnected in a topology referred to as a hypercube.
Figures 4 through 6 illustrate the steps involved in performing a distributed operation, such as computing the element-wise sum of two partitioned arrays A and B. First, the user's application dispatches the operation to the master process via the overloaded addition operator.
Figure 4. First step: the user invokes a method on a distributed array on the master process. Then, the master process broadcasts the operation to all workers. The payload consists of both the operation to be performed as well as references to the distributed input arguments.
Figure 5. Second step: The command is broadcast from the master to all workers.
Next, the master and workers perform parts of the underlying computation in parallel. Often, but not always, this involves communicating with other locales to exchange data. For example, in the case of element-wise addition, if the two arguments are partitioned such that each locale has all the data necessary to compute its portion of the output, the operation is performed element-wise without any communication. On the other hand, if, say, the first argument is partitioned by rows and the second by columns, the master and workers first collectively exchange pieces of the first matrix in order to compute the result.
Figure 6. The master and workers work on a part of the computation in parallel. In this particular example, the computation can be performed without requiring any communication.

Once each worker is finished with its portion of the computation, it sends back a successful-completion flag to the master. On the other hand, if it fails (say, because of an exception), it sends back a specific failure code along with a serialized exception object describing the failure.
Figure 7. Once the master and workers are done with the computation, the master collects any errors during computation. In this particular case, all work is completed successfully.
The master process waits for an acknowledgement from each of the workers. If all processes (including the master itself) completed successfully, the master returns a reference to the result back to the user's application. On the other hand, if one or more workers encountered an error, the master process rethrows the exception in the payload into the application, where it can be handled like any other .NET exception.
Figure 8. The master then returns the result of the computation to the user as a distributed result.
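The control flow of Figures 4 through 8 can be summarized in a schematic sketch. The function and status names below are illustrative only, not the actual runtime protocol:

```python
# Master-side control flow: broadcast the operation, collect per-worker
# status, then either return a reference to the distributed result or
# rethrow the first reported failure into the user's application.
def run_distributed(operation, workers):
    statuses = [worker(operation) for worker in workers]  # broadcast + compute
    for status, payload in statuses:
        if status != "OK":
            raise RuntimeError(payload)  # worker failure propagated to user
    return "reference to distributed result"

ok = lambda op: ("OK", None)
print(run_distributed("C = Add(A, B)", [ok, ok, ok]))
```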
4 Conclusions
The programming model in Cloud Numerics, based around operations on distributed arrays, simplifies the implementation of algorithms that are most naturally expressed in matrix notation and introduces no additional parallel-programming or synchronization constructs. Further, the model, based on a single thread of execution, greatly simplifies application development and debugging, and reduces the possibility of conditions such as races and deadlocks due to bugs in the user's application. The Cloud Numerics distributed runtime transparently executes operations on distributed arrays in parallel on the nodes of a cluster, leveraging existing, mature, high-performance libraries where possible. Although the general principles behind the design and implementation of Cloud Numerics are based on well-established fundamentals in high-performance scientific computing, it is still a new and rapidly-evolving framework. It is anticipated that future versions of Cloud Numerics will support:
- interoperability with Map/Reduce execution frameworks
- a richer set of functionality on distributed arrays
- operations on distributed sparse matrices
- better performance for existing functions
- a more user-friendly application deployment wizard
5 References
1. Blackford, L. S., Choi, J., Cleary, A., et al. ScaLAPACK Users' Guide. SIAM, Philadelphia, PA, 1997.
2. Dean, J. and Ghemawat, S. MapReduce: Simplified data processing on large clusters. Symposium on Operating System Design and Implementation (OSDI), 2004.
3. Gregor, D. and Lumsdaine, A. Design and implementation of a high-performance MPI for C# and the Common Language Infrastructure. ACM Principles and Practice of Parallel Programming, pages 133-142, February 2008.
4. Lin, J. and Dyer, C. Data-Intensive Text Processing with MapReduce. Morgan & Claypool, 2010.
5. Kang, U., Tsourakakis, C. E. and Faloutsos, C. PEGASUS: A peta-scale graph mining system - implementation and observations. IEEE International Conference on Data Mining, 2009.
6. Oliphant, T. A Guide to NumPy. Trelgol Publishing, 2005.
7. Owen, S., Anil, R. and Dunning, T. Mahout in Action. Manning Publications, 2011.
8. Snir, M. and Gropp, W. MPI: The Complete Reference. MIT Press, Cambridge, MA, 1998.
9. White, T. Hadoop: The Definitive Guide, 2nd ed. O'Reilly, Sebastopol, CA, 2008.
10. Yu, Y., Isard, M., Fetterly, D., et al. DryadLINQ: A system for general-purpose distributed data-parallel computing using a high-level language. Symposium on Operating System Design and Implementation (OSDI), San Diego, CA, 2008.