070-475 Free Practice Questions: "Microsoft Design and Implement Big Data Analytics Solutions"

You have a Microsoft Azure SQL database that contains Personally Identifiable Information (PII).
To mitigate the PII risk, you need to ensure that the data is encrypted at rest. The solution must minimize changes to the front-end applications.
What should you use?

Explanation: (visible to JPNTest members only)
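
The likely intended answer is Transparent Data Encryption (TDE): it encrypts data and log files at rest, and front-end applications keep connecting exactly as before. A minimal sketch of enabling it, assuming the pyodbc package (server, database, and credential values below are placeholders):

# Hedged sketch: enable TDE on an Azure SQL database by running the
# ALTER DATABASE statement against the logical server's master database.
# All connection values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=contoso.database.windows.net;"
    "DATABASE=master;UID=sqladmin;PWD=<password>",
    autocommit=True,
)
conn.execute("ALTER DATABASE [ContosoDb] SET ENCRYPTION ON;")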
You have the following Hive query.
CREATE TABLE UserVisits (username string, urlvisited string, time date);
LOAD DATA INPATH 'wasb:///Logs' OVERWRITE INTO TABLE UserVisits;
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the script.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation
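For context: because the CREATE TABLE statement omits the EXTERNAL keyword, UserVisits is a managed (internal) table, and LOAD DATA INPATH moves, rather than copies, the files under wasb:///Logs into the table's warehouse directory. A small sketch for checking this, assuming the PyHive package and a placeholder head-node address:

# Hedged sketch: inspect the table the script creates. Hive reports
# "Table Type: MANAGED_TABLE" for a table created without EXTERNAL;
# dropping such a table also deletes the loaded data.
from pyhive import hive

cursor = hive.connect("hn0-contoso.internal").cursor()
cursor.execute("DESCRIBE FORMATTED UserVisits")
for row in cursor.fetchall():
    print(row)
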
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure subscription that includes Azure Data Lake and Cognitive Services.
An administrator plans to deploy an Azure Data Factory.
You need to ensure that the administrator can create the data factory.
Solution: You add the user to the Owner role.
Does this meet the goal?
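
For context: creating a data factory requires write permissions at the subscription or resource-group scope, and the built-in Owner role grants full rights to create and manage all resources. A hedged sketch of the role assignment, assuming the azure-identity and azure-mgmt-authorization packages (every ID below is a placeholder except the well-known Owner role GUID):

# Hedged sketch: assign the built-in Owner role at resource-group scope.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

sub = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), sub)
scope = f"/subscriptions/{sub}/resourceGroups/adf-rg"
owner_definition = (
    f"/subscriptions/{sub}/providers/Microsoft.Authorization"
    "/roleDefinitions/8e3af657-a8ff-443c-a75c-047db0932988"  # Owner
)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be new GUIDs
    {"role_definition_id": owner_definition,
     "principal_id": "<admin-object-id>"},
)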

You need to automate the creation of a new Microsoft Azure data factory.
What are three possible technologies that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

Correct answer: A, B, D
Explanation: (visible to JPNTest members only)
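
The technologies usually cited for this question are Azure PowerShell, the Data Factory REST API, and a custom application built on the Data Factory .NET API. As one illustration, a hedged sketch using the later Python management SDK (package azure-mgmt-datafactory; subscription, resource group, and factory names are placeholders):

# Hedged sketch: create a data factory programmatically.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)
client.factories.create_or_update(
    "adf-rg",        # resource group (placeholder)
    "contoso-adf",   # factory name (placeholder)
    Factory(location="eastus"),
)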
You have four on-premises Microsoft SQL Server data sources as described in the following table.

You plan to create three Azure data factories that will interact with the data sources as described in the following table.

You need to deploy Microsoft Data Management Gateway to support the Azure Data Factory deployment. The solution must use new servers to host the instances of Data Management Gateway.
What is the minimum number of new servers and data management gateways that you should deploy? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: 3
Box 2: 3
Considerations for using the gateway: a gateway cannot be shared across data factories, and only one instance of Data Management Gateway can be installed on a single machine. Three data factories therefore require three gateways, and since the solution calls for new dedicated servers, each gateway needs its own server, for a total of three.
You need to recommend a data transfer solution to support the business goals.
What should you recommend?

You are automating the deployment of a Microsoft Azure Data Factory solution. The data factory will interact with a file stored in Azure Blob storage.
You need to use the REST API to create a linked service to interact with the file.
How should you complete the request body? To answer, drag the appropriate code elements to the correct locations. Each code element may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation
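For orientation: a Blob storage linked service is a JSON document whose properties.type names the store ("AzureStorage") and whose typeProperties carry the connection string. A hedged sketch that PUTs such a body through the management REST API, assuming the requests package (URL segments, api-version, token, and all names are placeholders):

# Hedged sketch: create an AzureStorage linked service via the REST API.
import requests

url = (
    "https://management.azure.com/subscriptions/<sub-id>"
    "/resourceGroups/adf-rg/providers/Microsoft.DataFactory"
    "/factories/contoso-adf/linkedservices/StorageLinkedService"
    "?api-version=2018-06-01"
)
body = {
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;"
                                "AccountName=<account>;AccountKey=<key>"
        },
    }
}
resp = requests.put(url, json=body,
                    headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
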
You are designing an Apache HBase cluster on Microsoft Azure HDInsight.
You need to identify which nodes are required for the cluster.
Which three nodes should you identify? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

Correct answer: A, D, F
Explanation: (visible to JPNTest members only)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to implement a new data warehouse.
You have the following information regarding the data warehouse:
* The first data files for the data warehouse will be available in a few days.
* Most queries that will be executed against the data warehouse are ad-hoc.
* The schemas of data files that will be loaded to the data warehouse change often.
* One month after the planned implementation, the data warehouse will contain 15 TB of data.
You need to recommend a database solution to support the planned implementation.
Solution: You recommend an Apache Hadoop system.
Does this meet the goal?

You plan to create a Microsoft Azure Data Factory pipeline that will connect to an Azure HDInsight cluster that uses Apache Spark.
You need to recommend which file format must be used by the pipeline. The solution must meet the following requirements:
* Store data in a columnar format
* Support compression
Which file format should you recommend?

Explanation: (visible to JPNTest members only)
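
Parquet is the usual recommendation here: it is a columnar format with built-in compression support, and Spark reads and writes it natively. A minimal PySpark sketch (paths are placeholders):

# Hedged sketch: write data out as compressed, columnar Parquet from Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()
df = spark.read.csv("wasb:///Logs", header=True)
df.write.option("compression", "snappy").parquet("wasb:///Logs-parquet")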
