![](https://crypto4nerd.com/wp-content/uploads/2024/02/18mzK14BlqV0AzG6-mnDskg-1024x512.png)
4. Counting the number of pixels
Meanwhile, the number of pixels obtained for each class can be counted as follows. Note, however, that if we use buffered points to take samples, the number of pixels is assumed to equal the number of points created for each class.
- `reduceRegion`: This function computes statistics for the region specified in the `geometry` parameter. It aggregates the pixel values of an image that intersect the given geometry (points, lines, or polygons) according to the specified reducer.
- `ee.Reducer.count()`: The reducer used here; it counts the number of pixels that fall inside each specified geometry.
- `geometry`: Each `geometry` parameter specifies an area used as training samples. In this script there are separate geometries for `water`, `vegetation`, `builtup` (built-up areas), and `bareland` (open land). This determines the areas of the image where the pixel counts are computed.
- `scale`: The nominal scale, in meters, of the projection used; it determines the spatial resolution of the calculation. A lower scale increases spatial resolution but may require more computational resources.
- `countWater`, `countVegetation`, `countBuiltup`, `countBareland`: These variables store the result of `reduceRegion` for each geometry, i.e. the pixel count within each area.
- `print`: Displays the calculated pixel count for each class (`water`, `vegetation`, `builtup`, `bareland`) in the console.
Note that although the console output shows a pixel count for all 8 bands, only the selected bands will be included in further processing.
// Calculate the number of pixels within each class
var countWater = image.reduceRegion({
reducer: ee.Reducer.count(),
geometry: water,
scale: 3 // Adjust the scale based on your image resolution
});
var countVegetation = image.reduceRegion({
reducer: ee.Reducer.count(),
geometry: vegetation,
scale: 3
});
var countBuiltup = image.reduceRegion({
reducer: ee.Reducer.count(),
geometry: builtup,
scale: 3
});
var countBareland = image.reduceRegion({
reducer: ee.Reducer.count(),
geometry: bareland,
scale: 3
});
// Print the number of pixels for each class
print('Number of pixels in water:', countWater);
print('Number of pixels in vegetation:', countVegetation);
print('Number of pixels in builtup:', countBuiltup);
print('Number of pixels in bareland:', countBareland);
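The counting above runs server-side in GEE, but the idea behind it is simple. As a rough sketch in plain JavaScript (not GEE code; the pixel objects and class labels below are made up for illustration), `ee.Reducer.count()` tallies only the unmasked pixels that fall inside each geometry:

```javascript
// Conceptual sketch (plain JavaScript, not GEE): ee.Reducer.count()
// counts the unmasked pixel values inside a geometry. Here each toy
// "pixel" carries a value (null = masked) and a class label standing
// in for the geometry it intersects.
var pixels = [
  { value: 0.2, cls: 'water' },
  { value: null, cls: 'water' },      // masked pixel: not counted
  { value: 0.5, cls: 'vegetation' },
  { value: 0.7, cls: 'vegetation' },
  { value: 0.9, cls: 'water' }
];

function countPixels(pixels, cls) {
  return pixels.filter(function (p) {
    return p.cls === cls && p.value !== null;
  }).length;
}

console.log(countPixels(pixels, 'water'));      // → 2 (masked pixel excluded)
console.log(countPixels(pixels, 'vegetation')); // → 2
```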
5. Extracting samples from the image
- `image.select(bands)`: Selects the predetermined bands for use in the sample extraction process.
- `.sampleRegions({})`: Extracts samples from the image at the locations of the previously defined ROI areas.
(a) `collection: sample`: Uses `sample`, the predefined training collection, as the regions from which samples are extracted.
(b) `properties: ['class']`: Copies the `class` property to each extracted sample, labeling each sample with its class.
(c) `scale: 4`: The scale, in meters, at which samples are extracted; it affects the number of samples extracted per area.
- `.randomColumn()`: Adds a column of random values to the samples, which is typically used to split the data into training and testing sets based on the random value in this column.
// Extract sample from image
var extract_lc = image.select(bands).sampleRegions({
collection: sample,
properties: ['class'],
scale: 4
}).randomColumn();
//print(extract_lc);
6. Splitting samples to train and test the classification models
- `extract_lc`: The collection of samples previously extracted from the image and training geometries.
- `.filter(ee.Filter.lte('random', 0.7))`: Selects the training set: samples whose random value is less than or equal to 0.7 go into `train`.
- `.filter(ee.Filter.gt('random', 0.7))`: Selects the testing set: samples whose random value is greater than 0.7 go into `test`.
This split relies on the random column added earlier: approximately 70% of the samples are used to train the model and the remaining 30% to test its performance. Note that both filters must use the same threshold (0.7); otherwise the two sets would overlap and the accuracy assessment would be biased.
// Split train and test
var train = extract_lc.filter(ee.Filter.lte('random', 0.7));
var test = extract_lc.filter(ee.Filter.gt('random', 0.7));
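To see why filtering on a shared random column yields a clean 70/30 split, here is a sketch in plain JavaScript (not GEE code; the sample objects are made up for illustration). Because both filters test the same stored random value against the same threshold, every sample lands in exactly one set:

```javascript
// Conceptual sketch (plain JavaScript, not GEE): each sample gets a
// uniform random value in [0, 1); filtering on that value splits the
// collection into roughly 70% training and 30% testing samples.
var samples = [];
for (var i = 0; i < 1000; i++) {
  samples.push({ id: i, random: Math.random() });
}

// Equivalent of ee.Filter.lte('random', 0.7) and ee.Filter.gt('random', 0.7)
var train = samples.filter(function (s) { return s.random <= 0.7; });
var test = samples.filter(function (s) { return s.random > 0.7; });

console.log('train:', train.length, 'test:', test.length);
```

Because the random value is stored on each sample, the split is reproducible within a script run and the two subsets are guaranteed to be disjoint.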
7. Creating the classification model
- `ee.Classifier.smileRandomForest(50)`: Creates a classifier using the Random Forest algorithm with 50 decision trees, which is then trained on the provided samples.
- `.train()`: Trains the classifier using the previously created training samples.
(a) `features: train`: Uses the training set `train` as the features to train the model.
(b) `classProperty: 'class'`: Specifies the `class` property as the label to be predicted by the model.
(c) `inputProperties: bands`: Uses the list of predetermined bands as the model's input properties.
After training the model on the training samples, `print(model.explain())` prints a description of the trained model, which provides insight into how the model performs the classification.
// Classification model
var model = ee.Classifier.smileRandomForest(50).train({
features: train,
classProperty: 'class',
inputProperties: bands
});
print(model.explain());
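As a conceptual sketch of what a Random Forest does at prediction time (plain JavaScript, not GEE; the toy "trees" below are hypothetical threshold rules, not real trained trees), each decision tree votes for a class and the forest returns the majority:

```javascript
// Conceptual sketch (plain JavaScript, not GEE): a random forest
// classifies a sample by letting every decision tree vote and
// returning the class with the most votes.
function majorityVote(trees, sample) {
  var counts = {};
  trees.forEach(function (tree) {
    var cls = tree(sample);
    counts[cls] = (counts[cls] || 0) + 1;
  });
  var best = null;
  Object.keys(counts).forEach(function (cls) {
    if (best === null || counts[cls] > counts[best]) best = cls;
  });
  return best;
}

// Three toy "trees" that threshold a single (hypothetical) NIR value.
var trees = [
  function (s) { return s.nir < 0.1 ? 'water' : 'vegetation'; },
  function (s) { return s.nir < 0.15 ? 'water' : 'vegetation'; },
  function (s) { return s.nir < 0.05 ? 'water' : 'vegetation'; }
];

console.log(majorityVote(trees, { nir: 0.08 })); // → 'water' (2 votes vs 1)
```

Using many slightly different trees and averaging their votes is what makes the forest more robust than any single decision tree.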
8. Testing the classification model
- `test.classify(model)`: Applies the previously trained model to the testing set (`test`), producing a classification for each sample in the testing set.
- `classifiedTest.errorMatrix('class', 'classification')`: Computes the confusion matrix by comparing the actual class (`class`) with the predicted class (`classification`) for the classified testing set.
- `print()`: Prints the confusion matrix along with other evaluation metrics such as overall accuracy, the kappa index, user's accuracy, and producer's accuracy.
// Test model
var classifiedTest = test.classify(model);

// Confusion matrix
var cm = classifiedTest.errorMatrix('class', 'classification');
print('Confusion Matrix', cm, 'Overall Accuracy', cm.accuracy(), 'Kappa', cm.kappa());
print('User Accuracy', cm.consumersAccuracy());
print('Producer Accuracy', cm.producersAccuracy());
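What `errorMatrix()` computes can be reproduced by hand. The following plain JavaScript sketch (not GEE code; the actual/predicted labels are made up for illustration) builds a confusion matrix with rows as actual classes and columns as predicted classes, then derives overall accuracy and the kappa index from it:

```javascript
// Conceptual sketch (plain JavaScript, not GEE): confusion matrix,
// overall accuracy, and kappa from actual vs. predicted labels.
function confusionMatrix(actual, predicted, numClasses) {
  var m = [];
  for (var i = 0; i < numClasses; i++) {
    m.push(new Array(numClasses).fill(0));
  }
  for (var k = 0; k < actual.length; k++) {
    m[actual[k]][predicted[k]] += 1; // rows: actual, columns: predicted
  }
  return m;
}

function overallAccuracy(m) {
  var correct = 0, total = 0;
  m.forEach(function (row, i) {
    row.forEach(function (v, j) {
      total += v;
      if (i === j) correct += v; // diagonal = correctly classified
    });
  });
  return correct / total;
}

function kappa(m) {
  var total = 0, po = 0, pe = 0;
  var rowSums = m.map(function (row) {
    return row.reduce(function (a, b) { return a + b; }, 0);
  });
  var colSums = m[0].map(function (_, j) {
    return m.reduce(function (a, row) { return a + row[j]; }, 0);
  });
  m.forEach(function (row, i) {
    row.forEach(function (v, j) { total += v; if (i === j) po += v; });
  });
  po /= total; // observed agreement
  for (var i = 0; i < m.length; i++) {
    pe += (rowSums[i] / total) * (colSums[i] / total); // chance agreement
  }
  return (po - pe) / (1 - pe);
}

// Toy example with 2 classes
var actual =    [0, 0, 0, 1, 1, 1];
var predicted = [0, 0, 1, 1, 1, 0];
var cm = confusionMatrix(actual, predicted, 2);
console.log(overallAccuracy(cm)); // ≈ 0.667
console.log(kappa(cm));           // ≈ 0.333
```

User's (consumer's) accuracy divides each diagonal cell by its column sum, and producer's accuracy divides it by its row sum, which is what `consumersAccuracy()` and `producersAccuracy()` report.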
9. Visualization parameters
- `values`: The values corresponding to each class in the classification result. For example, value 1 represents the water class, value 2 the vegetation class, and so on.
- `palette`: A list of colors used to display each class in the classification result; for instance, water is shown in blue and vegetation in green.
- `names`: A list of class names corresponding to the predefined values; each name is paired with the value at the same position in the `values` list.
// Visualization Parameter
var values = [1, 2, 3, 4];
var palette = ['blue','green', 'red', 'yellow'];
var names = ['Water', 'Vegetation', 'Builtup', 'Bareland'];
10. Apply classification model
- `image.classify(model, 'lc_class')`: Classifies the entire image using the trained model (`model`); the result is stored in the `lc_class` variable.
- `.set()`: Adds properties (metadata) to the classification result. Here, two properties are added:
(a) `'lc_class_values': values`: Stores the values representing the classes of the classification result.
(b) `'lc_class_palette': palette`: Stores the color palette used to visualize the classification result.
// Apply model
var lc_class = image.classify(model, 'lc_class').set({
'lc_class_values': values,
'lc_class_palette': palette,
});
11. Displaying the map as a layer
- `Map.addLayer()`: Adds an image or layer to the interactive map in GEE.
- `lc_class`: The LULC classification image to be displayed.
- `{ min: 1, max: 4, palette: palette }`: Parameters used for displaying the image.
(a) `min: 1`: The minimum class value mapped onto the color palette.
(b) `max: 4`: The maximum class value mapped onto the color palette.
(c) `palette: palette`: Uses the predefined color palette to map class values 1 through 4 to their corresponding colors.
- `'Land Cover Bali Strait October 2023'`: The label naming the displayed LULC classification layer on the map.
// Display classified image with defined color palette
Map.addLayer(lc_class, {
min: 1,
max: 4,
palette: palette
}, 'Land Cover Bali Strait October 2023');
12. Display the map legend
- `Map.add()`: Adds elements to the map.
- `ui.Panel()`: Creates a UI panel containing the visual elements displayed as the legend.
- `names.map()`: Uses the `map` method to create a series of panels, each containing a color box and a label for one LULC class.
- `ui.Label('', { width: '30px', height: '15px', border: '0.5px solid black', backgroundColor: palette[index] })`: Creates a small box filled with the corresponding color from the `palette`.
- `ui.Label(name, { height: '15px' })`: Displays the name of the LULC class.
- `ui.Panel.Layout.flow()`: Sets the panel layout; here each class row is laid out horizontally and the rows are stacked vertically.
- `{ position: 'bottom-left' }`: Places the legend at the bottom left of the map.
// Legend
Map.add(
ui.Panel(
names.map(function(name, index){
return ui.Panel([
ui.Label('', { width: '30px', height: '15px', border: '0.5px solid black', backgroundColor: palette[index] }),
ui.Label(name, { height: '15px' })
], ui.Panel.Layout.flow('horizontal'));
}),
ui.Panel.Layout.flow('vertical'),
{ position: 'bottom-left' }
)
);
13. Export data to Google Drive
- `Export.image.toDrive()`: Exports an image from GEE to Google Drive.
- `image: lc_class`: The image to export, in this case the LULC classification result (`lc_class`).
- `description: 'lc_balistrait_oct23'`: The name of this export task.
- `maxPixels: 1e13`: The maximum number of pixels allowed in the export; it is set to `1e13` (10¹³) to avoid hitting the default pixel limit.
- `folder: 'Google Earth Engine'`: The folder in Google Drive where the export result is saved.
- `scale: 3`: The spatial scale of the exported image, in meters per pixel.
- `fileNamePrefix: 'Land Cover Bali Strait Oct 23'`: The file name saved in Google Drive.
- `fileFormat: 'GeoTIFF'`: The export file format; GeoTIFF is used because it supports rich spatial metadata and is readable by most GIS software.
// Export
Export.image.toDrive({
image: lc_class,
description: 'lc_balistrait_oct23',
maxPixels: 1e13,
folder: 'Google Earth Engine',
scale: 3,
fileNamePrefix: 'Land Cover Bali Strait Oct 23',
fileFormat: 'GeoTIFF'
});