First, determine whether the column being imported is a numeric type. If it is, try changing the field's data type to a string inside the Glue job itself and re-running the job.
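One common way to do this is a resolveChoice cast on the DynamicFrame. The sketch below assumes glueContext is already set up as in the fuller snippet that follows, and uses the same placeholder database, table, and column names:

# Cast the column to a string inside the Glue job via resolveChoice
dyf = glueContext.create_dynamic_frame.from_catalog(database="mydb", table_name="mytable")
dyf = dyf.resolveChoice(specs=[("numeric_column", "cast:string")])

If changing the type inside the job still does not solve the problem, you can try the following, fuller snippet, which casts the numeric column to a string and stages the data through S3 before loading it: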
import sys

import psycopg2
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql.types import StringType

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

# Set up the Spark and Glue contexts used below
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
# Read the data from the original data source
source_df = glueContext.create_dynamic_frame.from_catalog(database="mydb", table_name="mytable").toDF()
# Cast the numeric column to a string type
source_df = source_df.withColumn("numeric_column", source_df["numeric_column"].cast(StringType()))
# Write the data to S3 as CSV with a header row; Spark writes a directory of
# part files under this prefix, which the COPY statement below loads as a whole
source_df.write.mode("overwrite").option("header", "true").csv("s3://mybucket/myfile.csv")
# Load the exported files into the target table. Note that COPY ... FROM
# 's3://...' with a CREDENTIALS clause is Amazon Redshift syntax; plain RDS
# PostgreSQL cannot COPY directly from S3 (see the note after this snippet)
sql_statement = """
COPY mytable
FROM 's3://mybucket/myfile.csv'
CREDENTIALS 'aws_iam_role=ARN_OF_IAM_ROLE'
CSV QUOTE '"'
DELIMITER ','
IGNOREHEADER 1
;"""
rds_endpoint = "my-rds-instance-identifier.myregion.rds.amazonaws.com"
rds_port = 5432
rds_dbname = "mydb"
rds_username = "myuser"
rds_password = "mypassword"
connection = psycopg2.connect(
host=rds_endpoint,
port=rds_port,
dbname=rds_dbname,
user=rds_username,
password=rds_password
)
cursor = connection.cursor()
cursor.execute(sql_statement)
connection.commit()
cursor.close()
connection.close()
In this code, the data is first read from the original data source and the cast function converts the numeric column to a string type. The data is then written to an S3 bucket as CSV, and the COPY statement is executed over a psycopg2 connection to load the output files into the target table.
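Note that the COPY ... CREDENTIALS form above is Amazon Redshift syntax. If the target is RDS for PostgreSQL, the usual route is the aws_s3 extension instead. The sketch below is a minimal, hypothetical example under the assumptions that CREATE EXTENSION aws_s3 has been run on the instance, that its IAM role can read the bucket, and that the table, bucket, region, and object key are placeholders to substitute (Spark names its part files like part-00000-<uuid>-c000.csv):

# Hypothetical sketch: import one exported part file into RDS PostgreSQL
# using aws_s3.table_import_from_s3 instead of the Redshift-style COPY
import psycopg2

import_statement = """
SELECT aws_s3.table_import_from_s3(
    'mytable',                       -- target table
    '',                              -- column list ('' means all columns)
    '(format csv, header true)',     -- options forwarded to PostgreSQL COPY
    'mybucket',                      -- S3 bucket
    'myfile.csv/part-00000.csv',     -- one exported part file (placeholder key)
    'myregion'                       -- bucket region
);"""

conn = psycopg2.connect(host="my-rds-instance-identifier.myregion.rds.amazonaws.com",
                        port=5432, dbname="mydb", user="myuser", password="mypassword")
with conn, conn.cursor() as cur:
    cur.execute(import_statement)
conn.close()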