Free Google Professional-Data-Engineer Exam Actual Questions

The questions for Professional-Data-Engineer were last updated on May 2, 2024

Question No. 1

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well. However, when tested against new data, it performs poorly. What method can you employ to address this?

Correct Answer: C

Reference: https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877
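The answer choices are not reproduced here; the scenario describes classic overfitting, for which regularization techniques such as dropout are a common remedy. Below is a minimal Keras sketch, assuming dropout is the chosen method; the layer sizes and the 128-feature input are purely illustrative.

```python
import tensorflow as tf

# Minimal sketch: add dropout layers to a deep, densely connected network
# to reduce overfitting. Layer sizes and input width are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dropout(0.5),   # randomly drop 50% of activations during training
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```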


Question No. 2

You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?
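The keyed answer is not reproduced here; one common approach for drifting preferences is to keep the deployed model and retrain it incrementally as new data streams in. The sketch below assumes that approach; the feature width and the random placeholder batches merely stand in for the real pipeline output.

```python
import numpy as np
import tensorflow as tf

# Illustrative recommender stand-in; the real model and features would differ.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def fetch_new_batch():
    """Stand-in for reading the latest batch from the streaming pipeline."""
    return np.random.rand(256, 20), np.random.randint(0, 2, size=(256, 1))

for _ in range(10):  # in production this would be an ongoing loop or schedule
    features, labels = fetch_new_batch()
    # Incrementally update the existing weights with newly arrived data
    # rather than retraining from scratch each time.
    model.fit(features, labels, epochs=1, verbose=0)
```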

Question No. 3

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

Correct Answer: C
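The answer choices are not reproduced here; a common fix for this scaling problem is to normalize the single self-joined table into separate patient and visit tables so that reports use ordinary joins. The sketch below illustrates that idea with SQLite purely for demonstration; all table and column names are assumed.

```python
import sqlite3

# Normalize one wide, self-joined table into two narrow tables.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE patients (
        patient_id INTEGER PRIMARY KEY,
        name       TEXT,
        clinic     TEXT
    );
    CREATE TABLE visits (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patients(patient_id),
        visit_date TEXT,
        notes      TEXT
    );
    """
)
# A report query now joins two tables instead of self-joining one large table.
report = conn.execute(
    """
    SELECT p.name, COUNT(v.visit_id) AS visit_count
    FROM patients AS p
    JOIN visits   AS v ON v.patient_id = p.patient_id
    GROUP BY p.name
    """
).fetchall()
```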

Question No. 4

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?

Correct Answer: A

Reference: https://support.google.com/datastudio/answer/7020039?hl=en


Question No. 5

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

Correct Answer: D
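The answer choices are not reproduced here; one widely used pattern for this scenario is a Dataflow (Apache Beam) batch pipeline that writes well-formed rows to BigQuery and routes malformed rows to a dead-letter table for later analysis. The sketch below assumes that pattern; the bucket path, table names, schema, and three-column layout are all illustrative.

```python
import csv
import apache_beam as beam
from apache_beam.pvalue import TaggedOutput


class ParseCsvRow(beam.DoFn):
    """Parse one CSV line; route malformed rows to a 'dead_letter' output."""

    def process(self, line):
        try:
            fields = next(csv.reader([line]))
            if len(fields) != 3:  # assumed column count, for illustration only
                raise ValueError("unexpected column count")
            yield {"id": fields[0], "name": fields[1], "value": fields[2]}
        except Exception as err:
            yield TaggedOutput("dead_letter", {"raw_line": line, "error": str(err)})


# Pipeline options (runner, project, temp_location) are omitted for brevity.
with beam.Pipeline() as pipeline:
    rows = (
        pipeline
        | "ReadCsv" >> beam.io.ReadFromText("gs://example-bucket/daily_dump/*.csv")  # hypothetical path
        | "ParseRows" >> beam.ParDo(ParseCsvRow()).with_outputs("dead_letter", main="good")
    )
    rows.good | "WriteGoodRows" >> beam.io.WriteToBigQuery(
        "example-project:analytics.daily_records",         # hypothetical destination table
        schema="id:STRING,name:STRING,value:STRING",
    )
    rows.dead_letter | "WriteDeadLetter" >> beam.io.WriteToBigQuery(
        "example-project:analytics.daily_records_errors",  # hypothetical dead-letter table
        schema="raw_line:STRING,error:STRING",
    )
```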